1.
Three-dimensional space
–
Three-dimensional space is a geometric setting in which three values are required to determine the position of an element (that is, a point); this is the informal meaning of the term dimension. In physics and mathematics, a sequence of n numbers can be understood as a location in n-dimensional space; when n = 3, the set of all such locations is called three-dimensional Euclidean space. It is commonly represented by the symbol ℝ³, and it serves as a three-parameter model of the physical universe in which all known matter exists. However, this space is only one example of a large variety of spaces in three dimensions called 3-manifolds. In this setting, the three values can be labeled by any combination of three terms chosen from width, height, depth, and breadth. In mathematics, analytic geometry describes every point in three-dimensional space by means of three coordinates. Three coordinate axes are given, each perpendicular to the other two at the origin, the point at which they cross; they are usually labeled x, y, and z. Two distinct points determine a line. Three distinct points are either collinear or determine a unique plane; four distinct points can be collinear, coplanar, or determine the entire space. Two distinct lines can intersect, be parallel, or be skew. Two parallel lines, or two intersecting lines, lie in a unique plane, so skew lines are lines that do not meet and do not lie in a common plane. Two distinct planes either meet in a line or are parallel. Three distinct planes, no pair of which are parallel, can either meet in a common line, meet in a unique common point, or have no point in common; in the last case, the three lines of intersection of each pair of planes are mutually parallel. A line can lie in a given plane, intersect that plane in a unique point, or be parallel to the plane; in the last case, there will be lines in the plane that are parallel to the given line. A hyperplane is a subspace of one dimension less than the dimension of the full space; the hyperplanes of three-dimensional space are therefore its two-dimensional subspaces, that is, the planes.
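The incidence facts above (four points being coplanar or spanning the whole space) can be sketched numerically: four points are coplanar exactly when the scalar triple product of the three edge vectors from one of them is zero. This is a small illustrative sketch; the function names are our own, not from any particular library.

```python
# Points in R^3 are triples (x, y, z); the scalar triple product
# u . (v x w) is the signed volume of the parallelepiped spanned by
# u, v, w, and it vanishes exactly when the three vectors are coplanar.

def sub(p, q):
    """Component-wise difference of two points, giving a vector."""
    return tuple(a - b for a, b in zip(p, q))

def triple_product(u, v, w):
    """Scalar triple product u . (v x w)."""
    cx = v[1] * w[2] - v[2] * w[1]
    cy = v[2] * w[0] - v[0] * w[2]
    cz = v[0] * w[1] - v[1] * w[0]
    return u[0] * cx + u[1] * cy + u[2] * cz

def coplanar(p0, p1, p2, p3):
    """Four points are coplanar iff the edge vectors from p0 are."""
    return triple_product(sub(p1, p0), sub(p2, p0), sub(p3, p0)) == 0

# Four points in the plane z = 0 are coplanar; lifting one point off
# that plane makes the four points span the entire space.
flat = coplanar((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0))
lifted = coplanar((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 1))
```

Here `flat` is true and `lifted` is false, matching the two cases described in the text.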
2.
Euclidean space
–
In geometry, Euclidean space encompasses the two-dimensional Euclidean plane, the three-dimensional space of Euclidean geometry, and certain other spaces. It is named after the Ancient Greek mathematician Euclid of Alexandria; the qualifier Euclidean distinguishes these spaces from other types of spaces considered in modern geometry. Euclidean spaces also generalize to higher dimensions. Classical Greek geometry defined the Euclidean plane and Euclidean three-dimensional space using certain postulates, while the other properties of these spaces were deduced as theorems, and geometric constructions were used to define rational numbers as ratios of lengths. The modern approach instead specifies points of the space by collections of real numbers; this brings the tools of algebra and calculus to bear on questions of geometry and has the advantage that it generalizes easily to Euclidean spaces of more than three dimensions. From the modern viewpoint, there is essentially only one Euclidean space of each dimension. With Cartesian coordinates it is modelled by the coordinate space of the same dimension: in one dimension, this is the real line; in two dimensions, it is the Cartesian plane; and in higher dimensions it is a coordinate space with three or more real number coordinates. One way to think of the Euclidean plane is as a set of points satisfying certain relationships, expressible in terms of distance and angle. For example, there are two fundamental operations on the plane. One is translation, which means a shifting of the plane so that every point is shifted in the same direction and by the same distance. The other is rotation about a fixed point in the plane. In order to make all of this mathematically precise, the theory must clearly define the notions of distance, angle, translation, and rotation. Even when used in physical theories, Euclidean space is an abstraction detached from actual physical locations, specific reference frames, and measurement instruments.
The standard way to define such a space, as carried out in the remainder of this article, is to define the Euclidean plane as a two-dimensional real vector space equipped with an inner product. The reason for working with abstract vector spaces instead of Rn is that it is often preferable to work in a coordinate-free manner. Once the Euclidean plane has been described in this language, it is a simple matter to extend the concept to arbitrary dimensions; for the most part, the vocabulary, formulae, and calculations are not made any more difficult by the presence of more dimensions. Strictly speaking, Euclidean space is an affine space rather than a vector space; intuitively, the distinction says merely that there is no canonical choice of where the origin should go in the space.
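The two fundamental operations described above, translation and rotation, can be sketched on the coordinate plane; both are rigid motions, meaning they preserve the Euclidean distance between points. The function names here are illustrative.

```python
import math

# Translation shifts every point by the same vector; rotation turns the
# plane about the origin. Both leave the Euclidean distance unchanged.

def translate(p, v):
    return (p[0] + v[0], p[1] + v[1])

def rotate(p, theta):
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

def dist(p, q):
    """Euclidean distance, induced by the standard inner product."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

p, q = (1.0, 0.0), (0.0, 2.0)
d = dist(p, q)                                            # sqrt(5)

# Distance is invariant under both rigid motions:
d_translated = dist(translate(p, (3, -1)), translate(q, (3, -1)))
d_rotated = dist(rotate(p, 0.7), rotate(q, 0.7))
```

Checking `d == d_translated == d_rotated` (up to floating-point error) confirms the invariance that characterizes these motions.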
3.
Inner product
–
In linear algebra, an inner product space is a vector space with an additional structure called an inner product. This additional structure associates each pair of vectors in the space with a scalar quantity known as the inner product of the vectors. Inner products allow the introduction of intuitive geometrical notions such as the length of a vector or the angle between two vectors. They also provide the means of defining orthogonality between vectors. Inner product spaces generalize Euclidean spaces to vector spaces of any dimension, and are studied in functional analysis. An inner product induces an associated norm, so an inner product space is also a normed vector space. A complete inner product space is called a Hilbert space. An incomplete inner product space is called a pre-Hilbert space, since its completion with respect to the norm induced by the inner product is a Hilbert space. Inner product spaces over the field of complex numbers are sometimes referred to as unitary spaces. In this article, the field of scalars, denoted F, is either the field of real numbers R or the field of complex numbers C. Formally, an inner product space is a vector space V over the field F together with an inner product, i.e., a map ⟨ ⋅, ⋅ ⟩ : V × V → F satisfying certain axioms. Some authors, especially in physics and matrix algebra, prefer to define the inner product so that the first argument, rather than the second, is conjugate linear. In those disciplines the product ⟨ x, y ⟩ would be written ⟨ y | x ⟩ or, respectively, y† x, where bras, kets, and columns are identified with the vectors of V. This reverse order is now occasionally followed in the more abstract literature, taking ⟨ x, y ⟩ to be conjugate linear in x rather than y. A few authors instead find a middle ground by recognizing both ⟨ ⋅, ⋅ ⟩ and ⟨ ⋅ | ⋅ ⟩ as distinct notations differing only in which argument is conjugate linear. There are various reasons why it is necessary to restrict the base field to R and C in the definition.
Briefly, the base field has to contain an ordered subfield in order for non-negativity to make sense, and it has to have additional structure, such as a distinguished automorphism. More generally, any quadratically closed subfield of R or C will suffice for this purpose; however, in these cases, when it is a proper subfield, even finite-dimensional inner product spaces will fail to be metrically complete. In contrast, all finite-dimensional inner product spaces over R or C, such as those used in quantum computation, are automatically metrically complete. In some cases one needs to consider non-negative semi-definite sesquilinear forms; this means that ⟨ x, x ⟩ is only required to be non-negative.
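The conjugate-linearity convention discussed above can be made concrete with the standard inner product on Cⁿ, here taken conjugate linear in the second argument (the mathematics convention); the physics convention ⟨y|x⟩ simply swaps the arguments. This is a minimal sketch, not tied to any particular library.

```python
# Standard inner product on C^n: <x, y> = sum_i x_i * conj(y_i),
# linear in the first argument, conjugate linear in the second.

def inner(x, y):
    return sum(a * b.conjugate() for a, b in zip(x, y))

x = [1 + 2j, 3 - 1j]
y = [2 - 1j, 1j]

# Conjugate symmetry: <x, y> = conj(<y, x>).
lhs = inner(x, y)
rhs = inner(y, x).conjugate()

# <x, x> is real and non-negative, so it induces a norm ||x|| = sqrt(<x, x>).
norm_sq = inner(x, x)          # |1+2i|^2 + |3-i|^2 = 5 + 10 = 15
```

Swapping `a` and `b.conjugate()` in `inner` would give the physics convention; only the bookkeeping changes, not the induced norm.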
4.
Linear algebra
–
Linear algebra is the branch of mathematics concerning vector spaces and linear mappings between such spaces. It includes the study of lines, planes, and subspaces: the set of points with coordinates that satisfy a linear equation forms a hyperplane in an n-dimensional space, and the conditions under which a set of n hyperplanes intersect in a point are an important focus of study in linear algebra. Such an investigation is initially motivated by a system of linear equations containing several unknowns; such equations are naturally represented using the formalism of matrices and vectors. Linear algebra is central to both pure and applied mathematics. For instance, abstract algebra arises by relaxing the axioms of a vector space, leading to a number of generalizations, and functional analysis studies the infinite-dimensional version of the theory of vector spaces. Combined with calculus, linear algebra facilitates the solution of linear systems of differential equations. Because linear algebra is such a well-developed theory, nonlinear mathematical models are sometimes approximated by linear models. The study of linear algebra first emerged from the study of determinants: determinants were used by Leibniz in 1693, and subsequently Gabriel Cramer devised Cramer's rule for solving linear systems in 1750. Later, Gauss further developed the theory of solving linear systems by using Gaussian elimination. The study of matrix algebra first emerged in England in the mid-1800s. In 1844 Hermann Grassmann published his Theory of Extension, which included foundational new topics of what is today called linear algebra. In 1848, James Joseph Sylvester introduced the term matrix, which is Latin for womb. While studying compositions of linear transformations, Arthur Cayley was led to define matrix multiplication and inverses. Crucially, Cayley used a single letter to denote a matrix, thus treating a matrix as an aggregate object.
In 1882, Hüseyin Tevfik Pasha wrote the book titled Linear Algebra. The first modern and more precise definition of a vector space was introduced by Peano in 1888, and by 1900 a theory of linear transformations of finite-dimensional vector spaces had emerged. Linear algebra took its modern form in the first half of the twentieth century. The use of matrices in quantum mechanics, special relativity, and statistics helped spread the subject beyond pure mathematics; the origin of many of these ideas is discussed in the articles on determinants and Gaussian elimination. Linear algebra first appeared in American graduate textbooks in the 1940s. In the 1960s, following work by the School Mathematics Study Group, U.S. high schools asked 12th-grade students to do matrix algebra, formerly reserved for college. In France during the 1960s, educators attempted to teach linear algebra through finite-dimensional vector spaces in the first year of secondary school; this was met with a backlash in the 1980s that removed linear algebra from the curriculum. To better suit 21st-century applications, such as data mining and uncertainty analysis, some courses now emphasize matrix decompositions over hand elimination.
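The solution of systems of linear equations by Gaussian elimination, attributed to Gauss above, can be sketched in a few lines. This is a teaching sketch with partial pivoting, not a production solver.

```python
# Gaussian elimination with partial pivoting for solving A x = b.

def solve(A, b):
    n = len(A)
    # Build the augmented matrix [A | b] so row operations act on both.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: pick the row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate the entries below the pivot.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# The system 2x + y = 5, x + 3y = 10 has the unique solution x = 1, y = 3.
solution = solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])
```

Each elimination step is a row operation that leaves the solution set unchanged, which is exactly why the triangular system at the end has the same solution as the original.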
5.
Vector space
–
A vector space is a collection of objects called vectors, which may be added together and multiplied by numbers, called scalars in this context. Scalars are often taken to be real numbers, but there are also vector spaces with scalar multiplication by complex numbers, rational numbers, or generally any field. The operations of vector addition and scalar multiplication must satisfy certain requirements, called axioms. Euclidean vectors are an example of a vector space; they represent physical quantities such as forces: any two forces can be added to yield a third, and the multiplication of a force vector by a real number is another force vector. In the same vein, but in a more geometric sense, vectors representing displacements in the plane or in three-dimensional space also form vector spaces. Vector spaces are the subject of linear algebra and are well characterized by their dimension, which, roughly speaking, specifies the number of independent directions in the space. Infinite-dimensional vector spaces arise naturally in mathematical analysis as function spaces, and these vector spaces are generally endowed with additional structure, which may be a topology, allowing the consideration of issues of proximity and continuity. Among these topologies, those that are defined by a norm or inner product are commonly used; this is particularly the case of Banach spaces and Hilbert spaces. Historically, the first ideas leading to vector spaces can be traced back as far as the 17th century's analytic geometry, matrices, systems of linear equations, and Euclidean vectors. Today, vector spaces are applied throughout mathematics, science, and engineering. Furthermore, vector spaces furnish an abstract, coordinate-free way of dealing with geometrical and physical objects such as tensors, which in turn allows the examination of local properties of manifolds by linearization techniques. Vector spaces may be generalized in several ways, leading to more advanced notions in geometry and abstract algebra.
The concept of vector space will first be explained by describing two particular examples. The first example of a vector space consists of arrows in a fixed plane. This is used in physics to describe forces or velocities. Given any two such arrows, v and w, the parallelogram spanned by these two arrows contains one diagonal arrow that starts at the origin, too. This new arrow is called the sum of the two arrows and is denoted v + w. Scalar multiplication stretches an arrow v by a positive factor a to give the arrow av; when a is negative, av is defined as the arrow pointing in the opposite direction instead. The second example consists of pairs of real numbers x and y. Such a pair is written as (x, y); the sum of two such pairs and the multiplication of a pair with a number are defined as follows: (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2) and a(x, y) = (ax, ay). The first example above reduces to this one if the arrows are represented by the pairs of Cartesian coordinates of their end points. A vector space over a field F is a set V together with two operations that satisfy the eight axioms listed below. Elements of V are commonly called vectors, and elements of F are commonly called scalars. The second operation, called scalar multiplication, takes any scalar a and any vector v and gives another vector av. In this article, vectors are represented in boldface to distinguish them from scalars.
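The pairs-of-numbers example above can be sketched directly: vectors are pairs (x, y) with componentwise addition and scalar multiplication, and the vector space axioms become checkable identities. A minimal sketch follows; we spot-check two of the eight axioms.

```python
# Vectors as pairs of real numbers, with the two operations defined above.

def add(v, w):
    return (v[0] + w[0], v[1] + w[1])

def scale(a, v):
    return (a * v[0], a * v[1])

u, v, w = (1, 2), (3, -1), (0, 5)

# Axiom spot-checks: commutativity of addition, and distributivity of
# scalar multiplication over vector addition.
commutative = add(v, w) == add(w, v)
distributive = scale(2, add(v, w)) == add(scale(2, v), scale(2, w))
```

Both checks succeed for these sample vectors; a proof of the axioms for all pairs reduces to the corresponding field axioms of the real numbers, applied componentwise.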
6.
Linear functional
–
In linear algebra, a linear functional or linear form is a linear map from a vector space to its field of scalars. The set of all linear functionals from V to k, Homk(V, k), forms a vector space over k under pointwise addition and scalar multiplication. This space is called the dual space of V, or sometimes the algebraic dual space; it is often written V∗ or V′ when the field k is understood. If V is a topological vector space, the space of continuous linear functionals — the continuous dual — is often simply called the dual space, and if V is a Banach space, then so is its dual. To distinguish the ordinary dual space from the continuous dual space, the former is sometimes called the algebraic dual. In finite dimensions, every linear functional is continuous, so the continuous dual is the same as the algebraic dual. Suppose that vectors in the coordinate space Rn are represented as column vectors x = (x1, …, xn)ᵀ. For each row vector a = (a1, …, an) there is a linear functional f defined by f(x) = a1 x1 + ⋯ + an xn; this is just the matrix product of the row vector a and the column vector x. Linear functionals first appeared in functional analysis, the study of spaces of functions. Let Pn denote the vector space of real-valued polynomial functions of degree ≤ n defined on an interval. If c is a point of that interval, then let evc : Pn → R be the evaluation functional evc(f) = f(c). The mapping f ↦ f(c) is linear since (f + g)(c) = f(c) + g(c) and (αf)(c) = α f(c). The integration functional I, sending f to the integral of f over the interval, likewise defines a linear functional on Pn. If x0, …, xn are n + 1 distinct points in the interval, then the evaluation functionals evxi form a basis of the dual space of Pn, and there are coefficients a0, …, an for which I(f) = a0 f(x0) + ⋯ + an f(xn) for all f in Pn; this forms the foundation of the theory of numerical quadrature. Linear functionals are particularly important in quantum mechanics: quantum mechanical systems are represented by Hilbert spaces, which are anti-isomorphic to their own dual spaces.
A state of a quantum mechanical system can be identified with a linear functional; for more information see bra–ket notation. In the theory of generalized functions, certain kinds of generalized functions called distributions can be realized as linear functionals on spaces of test functions.
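Two of the linear functionals discussed above can be sketched concretely: a row vector acting on column vectors by f(x) = a1 x1 + ⋯ + an xn, and the evaluation functional evc sending a polynomial to its value at c. The representations (lists for vectors, coefficient lists for polynomials) are our own illustrative choices.

```python
# A row vector a defines the linear functional f(x) = sum_i a_i x_i.
def row_functional(a):
    return lambda x: sum(ai * xi for ai, xi in zip(a, x))

# The evaluation functional ev_c on polynomials, where a polynomial is
# given by its coefficient list, lowest degree first.
def evaluation(c):
    return lambda coeffs: sum(ck * c**k for k, ck in enumerate(coeffs))

f = row_functional([1, -2, 3])
value = f([4, 5, 6])            # 1*4 - 2*5 + 3*6 = 12

ev2 = evaluation(2)
p = [1, 0, 1]                   # p(t) = 1 + t^2
q = [0, 3]                      # q(t) = 3t

# Linearity of ev_2: ev_2(p + q) = ev_2(p) + ev_2(q),
# with p + q represented by the padded coefficient sum [1, 3, 1].
linear = ev2([1, 3, 1]) == ev2(p) + ev2(q)
```

Both functionals return scalars, never vectors, which is the defining feature of a linear functional.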
7.
Differential geometry
–
Differential geometry is a mathematical discipline that uses the techniques of differential calculus, integral calculus, linear algebra, and multilinear algebra to study problems in geometry. The theory of plane and space curves and of surfaces in three-dimensional Euclidean space formed the basis for the development of differential geometry during the 18th century; since the late 19th century, differential geometry has grown into a field concerned more generally with geometric structures on differentiable manifolds. Differential geometry is closely related to differential topology and to the geometric aspects of the theory of differential equations. The differential geometry of surfaces captures many of the key ideas of the field: differential geometry arose and developed as a result of, and in connection to, the mathematical analysis of curves and surfaces. Unanswered questions there indicated greater, hidden relationships; initially applied to Euclidean space, further explorations led to non-Euclidean spaces and to metric and topological spaces. Riemannian geometry studies Riemannian manifolds: smooth manifolds with a Riemannian metric, that is, a concept of distance expressed by means of a smooth positive definite symmetric bilinear form defined on the tangent space at each point. Various concepts based on length, such as the arc length of curves, the area of plane regions, and the volume of solids, all possess natural analogues in Riemannian geometry. The notion of a directional derivative of a function from multivariable calculus is extended in Riemannian geometry to the notion of a covariant derivative of a tensor, and many concepts and techniques of analysis and differential equations have been generalized to the setting of Riemannian manifolds. A distance-preserving diffeomorphism between Riemannian manifolds is called an isometry; this notion can also be defined locally, i.e., for small neighborhoods of points. Any two regular curves are locally isometric.
In higher dimensions, the Riemann curvature tensor is an important pointwise invariant associated with a Riemannian manifold that measures how close it is to being flat. An important class of Riemannian manifolds is the Riemannian symmetric spaces, whose curvature is not necessarily constant; these are the closest analogues to the plane and space considered in Euclidean and non-Euclidean geometry. Pseudo-Riemannian geometry generalizes Riemannian geometry to the case in which the metric tensor need not be positive-definite; a special case of this is a Lorentzian manifold, which is the mathematical basis of Einstein's general relativity theory of gravity. Finsler geometry has the Finsler manifold as its main object of study. This is a manifold with a Finsler metric, i.e., a Banach norm defined on each tangent space; Riemannian manifolds are special cases of the more general Finsler manifolds. A Finsler structure on a manifold M is a function F : TM → [0, ∞) such that F(x, my) = |m| F(x, y) for all (x, y) in TM and all real numbers m, and such that F is infinitely differentiable on TM away from the zero section. Symplectic geometry is the study of symplectic manifolds. A symplectic manifold is an almost symplectic manifold for which the symplectic form ω is closed, and a diffeomorphism between two symplectic manifolds which preserves the symplectic form is called a symplectomorphism. Non-degenerate skew-symmetric bilinear forms can only exist on even-dimensional vector spaces, so symplectic manifolds necessarily have even dimension. In dimension 2, a symplectic manifold is just a surface endowed with an area form, and a symplectomorphism is an area-preserving diffeomorphism.
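Of the length-based concepts mentioned above, arc length is the most basic: for a curve c(t) in the plane with the standard Euclidean metric, the length is the integral of |c′(t)|. A simple numerical sketch approximates it by summing chord lengths; the curve and step count are illustrative choices.

```python
import math

# Approximate the arc length of a plane curve by summing chord lengths;
# as the subdivision refines, the sum converges to the integral of |c'(t)|.

def curve(t):
    """A circle of radius 2, parametrized by angle."""
    return (2 * math.cos(t), 2 * math.sin(t))

def arc_length(c, t0, t1, steps=10000):
    total = 0.0
    prev = c(t0)
    for i in range(1, steps + 1):
        t = t0 + (t1 - t0) * i / steps
        pt = c(t)
        total += math.hypot(pt[0] - prev[0], pt[1] - prev[1])
        prev = pt
    return total

length = arc_length(curve, 0.0, 2 * math.pi)   # should approach 2 * pi * 2
```

For the radius-2 circle the exact length is 4π, and the chord sum agrees to several decimal places at this step count.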
8.
Differentiable manifold
–
In mathematics, a differentiable manifold is a type of manifold that is locally similar enough to a linear space to allow one to do calculus. Any manifold can be described by a collection of charts, also known as an atlas; one may then apply ideas from calculus while working within the individual charts, since each chart lies within a linear space to which the usual rules of calculus apply. If the charts are suitably compatible, then computations done in one chart are valid in any other differentiable chart. In formal terms, a differentiable manifold is a topological manifold with a globally defined differential structure. Any topological manifold can be given a differential structure locally by using the homeomorphisms in its atlas and the standard differential structure on a linear space. In other words, where the domains of charts overlap, the coordinates defined by each chart are required to be differentiable with respect to the coordinates defined by every other chart in the atlas. The maps that relate the coordinates defined by the various charts to one another are called transition maps. Differentiability means different things in different contexts, including continuously differentiable, k times differentiable, and smooth. Furthermore, the ability to induce such a differential structure on an abstract space allows one to extend the definition of differentiability to spaces without global coordinate systems. A differential structure allows one to define a globally differentiable tangent space and differentiable functions. Differentiable manifolds are very important in physics: special kinds of differentiable manifolds form the basis for theories such as classical mechanics and general relativity. It is possible to develop a calculus for differentiable manifolds, and this leads to such mathematical machinery as the exterior calculus.
The study of calculus on differentiable manifolds is known as differential geometry. The emergence of differential geometry as a distinct discipline is generally credited to Carl Friedrich Gauss and Bernhard Riemann. Riemann first described manifolds in his famous habilitation lecture before the faculty at Göttingen, and these ideas found a key application in Einstein's theory of general relativity and its underlying equivalence principle. A modern definition of a 2-dimensional manifold was given by Hermann Weyl in his 1913 book on Riemann surfaces; the widely accepted general definition of a manifold in terms of an atlas is due to Hassler Whitney. A presentation of a topological manifold is a second countable Hausdorff space that is locally homeomorphic to a linear space. This formalizes the notion of patching together pieces of a space to make a manifold, and the manifold produced also contains the data of how it has been patched together. However, different atlases may produce the same manifold; a manifold does not come with a preferred atlas. Thus, one defines a manifold to be a space as above together with an equivalence class of atlases. There are a number of different types of differentiable manifolds, depending on the precise differentiability requirements on the transition functions. Some common examples include the following: a differentiable manifold is a topological manifold equipped with an equivalence class of atlases whose transition maps are all differentiable.
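The notions of chart and transition map above can be illustrated on the unit circle: two overlapping charts each record an angle coordinate, measured from different reference directions, and on the overlap the transition map between the two coordinates is just a shift of angle, hence smooth. The chart conventions here are our own illustrative choices, not a standard atlas.

```python
import math

# Two angle charts on the unit circle, differing by the reference
# direction from which the angle is measured.

def chart_angle(p, offset):
    """Coordinate of a circle point p = (cos t, sin t): the angle of p
    measured relative to the chosen reference direction (offset)."""
    return math.atan2(p[1], p[0]) - offset

def transition(coord):
    """Transition map from the offset-0 chart to the offset-pi/2 chart
    on their overlap: a constant shift of angle, hence smooth."""
    return coord - math.pi / 2

t = 1.0                                  # a point lying in both chart domains
p = (math.cos(t), math.sin(t))
a = chart_angle(p, 0.0)                  # coordinate in the first chart
b = chart_angle(p, math.pi / 2)          # coordinate in the second chart

# The transition map carries the first coordinate to the second.
consistent = abs(transition(a) - b) < 1e-12
```

Because the transition map is infinitely differentiable, computations done in one chart (say, a derivative along the circle) agree with those done in the other, which is exactly the compatibility condition described in the text.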
9.
Smooth function
–
In mathematical analysis, the smoothness of a function is a property measured by the number of continuous derivatives it has. A smooth function is a function that has derivatives of all orders everywhere in its domain. A differentiability class is a classification of functions according to the properties of their derivatives; higher order differentiability classes correspond to the existence of more derivatives. Consider an open set on the real line and a function f defined on that set with real values, and let k be a non-negative integer. The function f is said to be of class Ck if the derivatives f′, f′′, …, f⁽ᵏ⁾ exist and are continuous. The function f is said to be of class C∞, or smooth, if it has derivatives of all orders. The function f is said to be of class Cω, or analytic, if f is smooth and equals its Taylor series expansion around any point in its domain; Cω is thus strictly contained in C∞, and bump functions are examples of functions in C∞ but not in Cω. To put it differently, the class C0 consists of all continuous functions, and the class C1 consists of all differentiable functions whose derivative is continuous; thus, a C1 function is exactly a function whose derivative exists and is of class C0. In particular, Ck is contained in Ck−1 for every k ≥ 1, and C∞, the class of infinitely differentiable functions, is the intersection of the sets Ck as k varies over the non-negative integers. The function f(x) = x for x ≥ 0 and f(x) = 0 for x < 0 is continuous but not differentiable at 0, so it is of class C0 but not C1. The function g(x) = x² sin(1/x) for x ≠ 0, with g(0) = 0, is differentiable everywhere; but because cos(1/x) oscillates as x → 0, its derivative g′ is not continuous at zero. Therefore, this function is differentiable but not of class C1. The functions f(x) = |x|^(k+1), where k is even, are continuous and k times differentiable at all x, but at x = 0 they are not (k + 1) times differentiable, so they are of class Ck but not of class Ck+1. The exponential function is analytic, and so of class Cω; the trigonometric functions are also analytic wherever they are defined. Bump functions are examples of smooth functions with compact support. Let n and m be positive integers. If f is a function from an open subset of Rn with values in Rm, then f has component functions f1, …, fm.
Each of these may or may not have partial derivatives; the classes C∞ and Cω are defined as before. These criteria of differentiability can be applied to the transition functions of a differential structure, and the resulting space is called a Ck manifold. If one wishes to start with a coordinate-independent definition of the class Ck, one may begin by considering maps between Banach spaces. A map from one Banach space to another is differentiable at a point if there is an affine map which approximates it at that point.
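The classic example above, f(x) = x² sin(1/x) with f(0) = 0, can be examined numerically: the difference quotient at 0 tends to 0 (so f′(0) exists), yet the derivative f′(x) = 2x sin(1/x) − cos(1/x) keeps oscillating between roughly −1 and 1 as x → 0, so f′ is not continuous at 0 and f is not C¹.

```python
import math

# f is differentiable everywhere, but its derivative oscillates near 0.

def f(x):
    return x * x * math.sin(1 / x) if x != 0 else 0.0

def f_prime(x):
    """Derivative for x != 0 (at 0 the derivative is 0 by the squeeze bound)."""
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

# The difference quotient at 0 is x * sin(1/x), bounded by |x|, so f'(0) = 0.
quotient = f(1e-8) / 1e-8

# But f' takes values near +1 and -1 at points arbitrarily close to 0:
near_plus = f_prime(1 / (math.pi * 101))   # here cos(1/x) = cos(101*pi) = -1
near_minus = f_prime(1 / (math.pi * 100))  # here cos(1/x) = cos(100*pi) = +1
```

Since every neighborhood of 0 contains points where f′ is near +1 and points where it is near −1, no value of f′(0) can make f′ continuous there.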
10.
Section (fiber bundle)
–
In the mathematical field of topology, a section of a fiber bundle E over a base space B with projection π is a continuous right inverse of π, i.e., a continuous map σ : B → E with π(σ(x)) = x for all x in B. A section is an abstract characterization of what it means to be a graph: when E is a product B × Y and π the projection onto the first factor, a section is exactly the graph σ(x) = (x, g(x)) of a function g : B → Y. The language of fibre bundles allows this notion of a section to be generalized to the case when E is not necessarily a Cartesian product. If π : E → B is a fibre bundle, then a section is a choice of a point σ(x) in each of the fibres; the condition π(σ(x)) = x simply means that the section at a point x must lie over x. For example, when E is a vector bundle, a section of E is a choice of an element of the vector space Ex lying over each point x ∈ B. Sections, particularly of principal bundles and vector bundles, are very important tools in differential geometry. In this setting, the base space B is a smooth manifold M, and one considers the space of smooth sections of E over an open set U, denoted C∞(U, E). It is also useful in analysis to consider spaces of sections with intermediate regularity. Fiber bundles do not in general have such global sections, so it is also useful to define sections only locally. A local section of a fiber bundle is a continuous map s : U → E, where U is an open set in B, such that π(s(x)) = x for all x in U. If (U, φ) is a local trivialization of E, where φ is a homeomorphism from π−1(U) to U × F, then local sections always exist over U. Such sections form a sheaf over B called the sheaf of sections of E. The space of continuous sections of a fiber bundle E over U is sometimes denoted C(U, E), while the space of global sections of E is often denoted Γ(E) or Γ(B, E). Sections are studied in homotopy theory and algebraic topology, where one of the main goals is to account for the existence or non-existence of global sections. An obstruction denies the existence of global sections since the space is too twisted; more precisely, obstructions obstruct the possibility of extending a local section to a global section due to the space's twistedness.
Obstructions are indicated by particular characteristic classes, which are cohomology classes. For example, a principal bundle has a global section if and only if it is trivial. On the other hand, a vector bundle always has a global section, namely the zero section; however, it only admits a nowhere vanishing section if its Euler class is zero. Obstructions to extending local sections may be generalized in the following manner: take a topological space and form a category whose objects are its open subsets. Thus we use a category to generalize a topological space, and we generalize the notion of a local section using sheaves of abelian groups, which assign to each object an abelian group. There is an important distinction here: intuitively, local sections are like vector fields on an open subset of a topological space.
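The graph description of a section above can be sketched directly for a trivial bundle E = B × F: a section sends each base point x to a point (x, g(x)) in the fiber over x, and projecting back down returns x. The base sample and the function g here are illustrative choices.

```python
# A global section of the trivial bundle E = B x F, realized as the
# graph of a function g, checked against the defining condition
# pi(sigma(x)) = x on a finite sample of the base.

B = [0.0, 0.5, 1.0, 1.5]          # a finite sample of base points

def projection(e):
    """pi : E -> B, forgetting the fiber coordinate."""
    return e[0]

def section(x):
    """sigma(x) = (x, g(x)) with g(x) = x^2: the graph of g."""
    return (x, x * x)

# The section condition holds at every sampled base point.
is_section = all(projection(section(x)) == x for x in B)
```

For a non-trivial bundle no single such formula works over all of B, which is precisely where local sections and the obstructions discussed above enter.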
11.
Total space
–
In mathematics, and particularly topology, a fiber bundle is a space that is locally a product space, but globally may have a different topological structure. Formally, a fiber bundle is a structure (E, B, π, F), where E, B, and F are topological spaces and π : E → B is a continuous surjection satisfying a local triviality condition. The space B is called the base space of the bundle, E the total space, and F the fiber; the map π, called the projection or submersion of the bundle, is regarded as part of the structure of the bundle. In the trivial case, E is just B × F, and this is called a trivial bundle. Examples of non-trivial fiber bundles include the Möbius strip and the Klein bottle. Fiber bundles such as the tangent bundle of a manifold and more general vector bundles play an important role in differential geometry and differential topology, as do principal bundles. Mappings between total spaces of bundles that commute with the projection maps are known as bundle maps, and a bundle map from the base space itself to E is called a section of E. Fiber bundles became their own object of study in the period 1935–1940; the first general definition appeared in the works of Hassler Whitney, who came to it from his study of the more particular notion of a sphere bundle. We shall assume in what follows that the base space B is connected. Local triviality requires that each point of B have an open neighborhood U and a homeomorphism φ : π−1(U) → U × F making the appropriate diagram commute, where proj1 : U × F → U is the natural projection. The set of all such pairs (U, φ) is called a local trivialization of the bundle. Thus for any p in B, the preimage π−1({p}) is homeomorphic to F and is called the fiber over p. Every fiber bundle π : E → B is an open map, since projections of products are open maps; therefore B carries the quotient topology determined by the map π. A fiber bundle is often denoted F → E → B which, in analogy with a short exact sequence, indicates which space is the fiber, which the total space, and which the base space, as well as the map from total to base space.
A smooth fiber bundle is a fiber bundle in the category of smooth manifolds; that is, E, B, and F are required to be smooth manifolds. Let E = B × F and let π : E → B be the projection onto the first factor. Then E is a fiber bundle over B; here E is not just locally a product but globally one. Any such fiber bundle is called a trivial bundle, and any fiber bundle over a contractible CW-complex is trivial.
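The contrast between the trivial bundle and the Möbius strip mentioned above can be sketched combinatorially: both are bundles over the circle with fiber a line, described by two charts glued on two overlap regions by transition functions, and the Möbius strip flips the fiber (multiplies by −1) on one overlap. A candidate global section given by a constant value in each chart must satisfy the gluing condition on both overlaps; it does for the cylinder but not for the Möbius strip. This is a toy sketch of the gluing data, not a full topological model.

```python
# Two charts over the circle, glued on two overlaps by transition
# functions g (numbers acting on the fiber R by multiplication).

def consistent(transitions, s1, s2):
    """Check the gluing condition s2 = g * s1 on both overlap regions."""
    return all(s2 == g * s1 for g in transitions)

cylinder = [1, 1]        # trivial bundle: both overlaps glue by the identity
mobius = [1, -1]         # Mobius strip: one overlap flips the fiber

# The candidate section "constant value 1 in each chart":
cylinder_ok = consistent(cylinder, 1, 1)   # a global nonvanishing section exists
mobius_ok = consistent(mobius, 1, 1)       # the flip breaks consistency
```

Any continuous section of the Möbius band must in fact cross zero somewhere, which is the simplest instance of the obstruction phenomenon for sections.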
12.
Tangent bundle
–
In differential geometry, the tangent bundle of a differentiable manifold M is a manifold TM which assembles all the tangent vectors of M. As a set, it is given by the disjoint union of the tangent spaces of M; that is, TM = ⨆x∈M TxM = ⋃x∈M ({x} × TxM) = { (x, v) : x ∈ M, v ∈ TxM }, where TxM denotes the tangent space to M at the point x. So an element of TM can be thought of as a pair (x, v), where x is a point in M and v is a tangent vector to M at x. There is a natural projection π : TM → M defined by π(x, v) = x; this projection maps each tangent space TxM to the single point x. The tangent bundle comes equipped with a natural topology, and with this topology the tangent bundle of a manifold is the prototypical example of a vector bundle. A section of TM is a vector field on M, and the dual bundle to TM is the cotangent bundle. By definition, a manifold M is parallelizable if and only if its tangent bundle is trivial, and M is framed if and only if its tangent bundle TM is stably trivial; for example, the n-dimensional sphere Sn is framed for all n, but parallelizable only for n = 1, 3, 7. One of the main roles of the tangent bundle is to provide a domain and range for the derivative of a smooth function: namely, if f : M → N is a smooth function, with M and N smooth manifolds, its derivative is a smooth function Df : TM → TN. The tangent bundle comes equipped with a topology and smooth structure so as to make it into a manifold in its own right. The dimension of TM is twice the dimension of M, since each tangent space of an n-dimensional manifold is an n-dimensional vector space. If U is an open subset of M, then there is a diffeomorphism from TU to U × Rn which restricts to a linear isomorphism from each tangent space TxU to {x} × Rn. As a manifold, however, TM is not always diffeomorphic to the product manifold M × Rn; when it is of the form M × Rn, the tangent bundle is said to be trivial. Trivial tangent bundles usually occur for manifolds equipped with a compatible group structure.
The tangent bundle of the circle is trivial because the circle is a Lie group. It is not true, however, that all spaces with trivial tangent bundles are Lie groups. Just as manifolds are locally modelled on Euclidean space, tangent bundles are locally modelled on U × Rn, where U is an open subset of Euclidean space. If M is a smooth manifold, then it comes equipped with an atlas of charts (Uα, φα), where Uα is an open set in M and φα : Uα → Rn is a diffeomorphism onto its image.
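The derivative map Df : TM → TN described above can be sketched in the simplest case, M = N = R: a tangent vector is a pair (x, v), the projection forgets v, and a smooth map f pushes (x, v) to (f(x), f′(x)·v). We use f = sin with its known derivative cos; the function names are illustrative.

```python
import math

# Tangent vectors on the real line as pairs (point, velocity), with the
# natural projection and the pushforward Df(x, v) = (f(x), f'(x) * v).

def projection(tv):
    x, v = tv
    return x

def pushforward(f, f_deriv):
    """Df : (x, v) -> (f(x), f'(x) * v), linear on each tangent space."""
    return lambda tv: (f(tv[0]), f_deriv(tv[0]) * tv[1])

Df = pushforward(math.sin, math.cos)

tv = (0.0, 3.0)              # tangent vector: point 0, velocity 3
image = Df(tv)               # (sin 0, cos 0 * 3) = (0.0, 3.0)

# Df acts linearly on the fiber over each fixed point:
linear = Df((0.0, 5.0))[1] == 5.0 * Df((0.0, 1.0))[1]
```

Note that `Df` respects the projections: the base point of the image is f applied to the base point of the input, which is exactly the bundle-map property of the derivative.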
13.
Covariance and contravariance of vectors
–
In multilinear algebra and tensor analysis, covariance and contravariance describe how the quantitative description of certain geometric or physical entities changes with a change of basis. In physics, a basis is sometimes thought of as a set of reference axes; a change of scale on the reference axes corresponds to a change of units in the problem. For instance, in changing scale from meters to centimeters, the components of a velocity vector multiply by 100. Vectors exhibit this behavior of changing scale inversely to changes in scale of the reference axes; as a result, vectors often have units of distance or distance times some other unit. In contrast, dual vectors (covectors) typically have units of the inverse of distance, or the inverse of distance times some other unit; an example of a dual vector is the gradient, which has units of a spatial derivative, or distance−1. The components of dual vectors change in the same way as changes of scale of the reference axes, whereas the components of vectors change oppositely. For a vector to be basis-independent, the matrix that transforms the vector of components must be the inverse of the matrix that transforms the basis vectors; the components of vectors are therefore said to be contravariant. In Einstein notation, contravariant components are denoted with upper indices, as in v = vi ei. For a dual vector to be basis-independent, the components of the dual vector must co-vary with a change of basis to remain representing the same covector; that is, the components must be transformed by the same matrix as the change of basis matrix. The components of dual vectors are said to be covariant; examples of covariant vectors generally appear when taking the gradient of a function. In Einstein notation, covariant components are denoted with lower indices, as in v = vi ei. Curvilinear coordinate systems, such as cylindrical or spherical coordinates, are often used in physical problems.
Tensors are objects in multilinear algebra that can have aspects of both covariance and contravariance. In physics, a vector typically arises as the outcome of a measurement or series of measurements, and is represented as a list of numbers. The numbers in the list depend on the choice of coordinate system; for a vector to represent a geometric object, it must be possible to describe how it looks in any other coordinate system. That is to say, the components of the vector will transform in a prescribed way in passing from one coordinate system to another. A contravariant vector has components that transform as the coordinates do under changes of coordinates, including rotation and dilation. The vector itself does not change under these operations; instead, the components of the vector change in a way that cancels the change in the axes. In other words, if the reference axes were rotated in one direction, the component representation of the vector would rotate in exactly the opposite way. Similarly, if the reference axes were stretched in one direction, the components of the vector would reduce in an exactly compensating way. This important requirement is what distinguishes a contravariant vector from any other triple of physically meaningful quantities.
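A minimal numeric sketch of this transformation rule (the 2×2 helper functions and the meters-to-centimeters example are illustrative, following the velocity example above): if the new basis vectors, written in the old basis, are the columns of a matrix A, then contravariant components must be transformed by A−1 so that the underlying vector is unchanged.

```python
def mat_vec(m, v):
    """Multiply a 2x2 matrix (list of rows) by a 2-vector."""
    return [m[0][0]*v[0] + m[0][1]*v[1], m[1][0]*v[0] + m[1][1]*v[1]]

def inverse2(m):
    """Inverse of a 2x2 matrix."""
    det = m[0][0]*m[1][1] - m[0][1]*m[1][0]
    return [[ m[1][1]/det, -m[0][1]/det],
            [-m[1][0]/det,  m[0][0]/det]]

# New basis: the old basis rescaled from meters to centimeters, so each
# new basis vector is 1/100 of the old one.
A = [[0.01, 0.0], [0.0, 0.01]]
v_old = [3.0, 4.0]                   # velocity components in meters/second
v_new = mat_vec(inverse2(A), v_old)  # components in centimeters/second
print([round(c, 6) for c in v_new])  # [300.0, 400.0] -- components scale
                                     # inversely to the basis vectors
```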
14.
Mean
–
In mathematics, mean has several different definitions depending on the context. In probability and statistics, the mean of a probability distribution is given by a formula analogous to that of the arithmetic mean. Not every probability distribution has a mean (see the Cauchy distribution for an example), and for some distributions the mean is infinite. The arithmetic mean of a set of numbers x1, x2, …, xn is typically denoted by x̄, pronounced "x bar". If the data set is based on a series of observations obtained by sampling from a statistical population, the arithmetic mean is termed the sample mean to distinguish it from the population mean. For a finite population, the population mean of a property is equal to the arithmetic mean of the given property while considering every member of the population; for example, the mean height is equal to the sum of the heights of every individual divided by the total number of individuals. The sample mean may differ from the population mean, especially for small samples; the law of large numbers dictates that the larger the size of the sample, the more likely it is that the sample mean will be close to the population mean. Outside of probability and statistics, a wide range of other notions of mean are often used in geometry and analysis; examples are given below. The geometric mean is an average that is useful for sets of positive numbers that are interpreted according to their product: x̄ = (x1 · x2 · ⋯ · xn)^(1/n). For example, the geometric mean of the five values 4, 36, 45, 50, 75 is (4 · 36 · 45 · 50 · 75)^(1/5) = 24300000^(1/5) = 30. The harmonic mean is an average which is useful for sets of numbers which are defined in relation to some unit, for example speed: x̄ = n / (1/x1 + ⋯ + 1/xn). AM, GM, and HM satisfy the inequalities AM ≥ GM ≥ HM, with equality holding if and only if all the elements of the sample are equal. In descriptive statistics, the mean may be confused with the median, mode or mid-range, as any of these may be called an average. The mean of a set of observations is the arithmetic average of the values; however, for skewed distributions the mean is not necessarily the same as the middle value (median) or the most likely value (mode). For example, mean income is typically skewed upwards by a small number of people with very large incomes, so that the majority have an income lower than the mean. By contrast, the median income is the level at which half the population is below and half is above.
The mode income is the most likely income and favors the larger number of people with lower incomes. The mean of a probability distribution is the long-run arithmetic average value of a random variable having that distribution; in this context, it is also known as the expected value.
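The three means discussed above can be computed directly. A short Python sketch (the function names are our own) reproduces the worked geometric-mean example and checks the AM ≥ GM ≥ HM inequality:

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # n-th root of the product; xs must be positive
    return math.prod(xs) ** (1 / len(xs))

def harmonic_mean(xs):
    # n divided by the sum of reciprocals
    return len(xs) / sum(1 / x for x in xs)

data = [4, 36, 45, 50, 75]
am, gm, hm = arithmetic_mean(data), geometric_mean(data), harmonic_mean(data)
print(round(gm, 6))     # 30.0, matching the worked example above
assert am >= gm >= hm   # the AM-GM-HM inequality (42 >= 30 >= 15 here)
```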
15.
Sampling (signal processing)
–
In signal processing, sampling is the reduction of a continuous-time signal to a discrete-time signal. A common example is the conversion of a sound wave to a sequence of samples. A sample is a value or set of values at a point in time and/or space, and a sampler is a subsystem or operation that extracts samples from a continuous signal. A theoretical ideal sampler produces samples equivalent to the instantaneous value of the continuous signal at the desired points. Sampling can be done for functions varying in space, time, or any other dimension; for a function s(t) sampled at intervals of T seconds, the sampled function is given by the sequence s(nT), for integer values of n. The sampling frequency or sampling rate, fs = 1/T, is the number of samples obtained in one second. Reconstructing a continuous function from samples is done by interpolation algorithms; the Whittaker–Shannon interpolation formula is mathematically equivalent to an ideal lowpass filter whose input is a sequence of Dirac delta functions that are modulated (multiplied) by the sample values. When the time interval between adjacent samples is a constant T, the sequence of delta functions is called a Dirac comb. Mathematically, the modulated Dirac comb is equivalent to the product of the comb function with s(t). That purely mathematical abstraction is sometimes referred to as impulse sampling. Most sampled signals are not simply stored and reconstructed, but the fidelity of a theoretical reconstruction is a customary measure of the effectiveness of sampling. That fidelity is reduced when s(t) contains frequency components whose periodicity is smaller than 2 samples; the quantity ½ cycle/sample × fs samples/sec = fs/2 cycles/sec is known as the Nyquist frequency of the sampler. Therefore, s(t) is usually the output of a lowpass filter, functioning as an anti-aliasing filter; without an anti-aliasing filter, frequencies higher than the Nyquist frequency will influence the samples in a way that is misinterpreted by the interpolation process. In practice, the continuous signal is sampled using an analog-to-digital converter.
This results in deviations from the theoretically perfect reconstruction, collectively referred to as distortion. Various types of distortion can occur, including: Aliasing. Some amount of aliasing is inevitable because only theoretical, infinitely long functions can have no frequency content above the Nyquist frequency; aliasing can be made arbitrarily small by using a sufficiently large order of the anti-aliasing filter. Aperture error, which results from the fact that the sample is obtained as a time average within a sampling region; in a capacitor-based sample-and-hold circuit, aperture error is introduced because the capacitor cannot instantly change voltage, thus requiring the sample to have non-zero width. Jitter, or deviation from the precise sample timing intervals. Noise, including thermal sensor noise, analog circuit noise, etc.
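The aliasing effect described above can be demonstrated numerically. In the Python sketch below (the parameter values are illustrative), a 1 Hz sinusoid and a 9 Hz sinusoid, both sampled at fs = 8 samples per second, produce identical sample sequences: 9 Hz lies above the Nyquist frequency fs/2 = 4 Hz and aliases down to 1 Hz.

```python
import math

fs = 8.0           # sampling rate, samples per second
nyquist = fs / 2   # 4 Hz: the highest unambiguously representable frequency

def sample(freq, n):
    """Ideal sampling of a unit-amplitude sine at fs samples per second."""
    return [math.sin(2 * math.pi * freq * k / fs) for k in range(n)]

# A 1 Hz sine and a 9 Hz sine (fs + 1 Hz) yield the SAME samples, because
# sin(2*pi*9k/8) = sin(2*pi*k + 2*pi*k/8) = sin(2*pi*k/8).
s1, s9 = sample(1.0, 16), sample(9.0, 16)
print(all(abs(a - b) < 1e-9 for a, b in zip(s1, s9)))  # True
```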
16.
Net present value
–
Incoming and outgoing cash flows can also be described as benefit and cost cash flows, respectively. The time value of money dictates that time affects the value of cash flows; the decrease in the current value of future cash flows is based on the market-dictated rate of return. More technically, cash flows of nominally equal value over a series of periods have different effective values, with future cash flows less valuable the further off they are: a cash flow today is more valuable than an identical cash flow in the future, because a present flow can be invested immediately and begin earning returns. Net present value is determined by calculating the costs (negative cash flows) and benefits (positive cash flows) for each period of an investment. The period is typically one year, but could be measured in quarter-years, half-years or months. After the cash flow for each period is calculated, the present value of each one is achieved by discounting its future value at a periodic rate of return. NPV is the sum of all the discounted future cash flows. Because of its simplicity, NPV is a useful tool to determine whether a project or investment will result in a net profit or a loss: a positive NPV results in profit, while a negative NPV results in a loss. The NPV measures the excess or shortfall of cash flows, in present value terms, above the cost of funds. In a theoretical situation of unlimited capital budgeting, a company should pursue every investment with a positive NPV; however, in practical terms a company's capital constraints limit investments to the projects with the highest NPV whose cost cash flows, or initial cash investment, the company can afford. NPV is a central tool in discounted cash flow (DCF) analysis and is a standard method for using the time value of money to appraise long-term projects.
It is widely used throughout economics, finance, and accounting. In the case when all future cash flows are positive, or incoming, and the only outflow of cash is the purchase price, the NPV is simply the PV of the future cash flows minus the purchase price. NPV can be described as the difference between the sums of discounted cash inflows and cash outflows: it compares the present value of money today to the present value of money in the future, taking inflation and returns into account. The NPV of a sequence of cash flows takes as input the cash flows and a discount rate or discount curve and outputs a present value, or price; the converse process in DCF analysis takes a sequence of cash flows and a price as input and infers a discount rate as output. Each cash inflow/outflow is discounted back to its present value, and then all are summed. For educational purposes, R0, the initial cash flow, is commonly placed to the left of the sum to emphasize its role as the initial investment.
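The calculation described above, discounting each period's cash flow at a periodic rate of return and summing, can be sketched in a few lines of Python (the cash-flow figures below are invented for illustration):

```python
def npv(rate, cashflows):
    """Net present value of cashflows[t] received at the end of period t,
    with cashflows[0] the initial (time-zero) cash flow, typically negative."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Invest 1000 now, then receive 500 at the end of each of the next 3 years,
# discounted at a 10% periodic rate of return.
result = npv(0.10, [-1000, 500, 500, 500])
print(round(result, 2))  # 243.43 -> positive NPV, so a net profit
```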
17.
Cash flow
–
Cash flows are narrowly interconnected with the concepts of value, interest rate and liquidity. A cash flow that is to happen on a future day tN can be transformed into a cash flow of the same value in t0. Cash flows are often transformed into measures that give information, e.g. on a company's value and situation. The timing of cash flows into and out of projects is used as an input in financial models such as internal rate of return and net present value. Cash flow analysis is used to determine problems with a business's liquidity: being profitable does not necessarily mean being liquid, and a company can fail because of a shortage of cash even while profitable. It also serves as an alternative measure of a business's profits when it is believed that accrual accounting concepts do not represent economic realities; for instance, a company may be notionally profitable but generating little operational cash. In such a case, the company may be deriving additional operating cash by issuing shares or raising additional debt finance. Cash flow can be used to evaluate the quality of income generated by accrual accounting: when net income is composed of large non-cash items, it is considered low quality. Cash flow is also used to evaluate the risks within a financial product, e.g. matching cash requirements, evaluating default risk, re-investment requirements, etc. The cash flow notion is based loosely on cash flow statement accounting standards; the term is flexible and can refer to time intervals spanning past and future. It can refer to the total of all flows involved or a subset of those flows; subset terms include net cash flow, operating cash flow and free cash flow. The net cash flow of a company over a period is equal to the change in cash balance over this period, positive if the cash balance increases. The operating cash flow (OCF) of a project can be calculated as OCF = incremental earnings + depreciation = EBIT × (1 − tax rate) + depreciation = (revenue − costs − depreciation) × (1 − tax rate) + depreciation = (revenue − costs) × (1 − tax rate) + depreciation × tax rate.
The term depreciation × tax rate at the end of the formula is called the depreciation tax shield, through which we can see that there is a positive relation between depreciation and cash flow: depreciation reduces taxable income, so a higher depreciation charge increases the cash retained. Beyond operating cash flow, a project's cash flow also includes the change in net working capital, the cost or revenue related to the company's short-term assets such as inventory, and capital expenditure, the cost or gain related to the company's fixed assets, such as the cash used to buy new equipment or the cash gained from selling old equipment. The sum of the three components above is the total cash flow of a project. The net cash flow on its own only provides a limited amount of information.
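The operating cash flow formula and the depreciation tax shield can be illustrated numerically (all figures below are invented for illustration):

```python
def operating_cash_flow(revenue, costs, depreciation, tax_rate):
    """OCF = EBIT * (1 - tax rate) + depreciation,
    which algebraically equals (revenue - costs) * (1 - tax rate)
    plus the depreciation tax shield, depreciation * tax rate."""
    ebit = revenue - costs - depreciation
    return ebit * (1 - tax_rate) + depreciation

low  = operating_cash_flow(1000, 600, 100, 0.30)
high = operating_cash_flow(1000, 600, 200, 0.30)
print(round(low, 2), round(high, 2))  # 310.0 340.0
# Raising depreciation by 100 raises OCF by 100 * 0.30 = 30:
# exactly the depreciation tax shield.
```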
18.
Discount window
–
The term originated with the practice of sending a bank representative to a reserve bank teller window when a bank needed to borrow money. The interest rate charged on such loans by a central bank is called the discount rate, base rate, or repo rate. It is not the same thing as the federal funds rate or its equivalents in other currencies. In recent years, the discount rate has been approximately one percentage point above the federal funds rate. Because of this, it is a relatively unimportant factor in the control of the money supply and is only taken advantage of at large volume during emergencies. In the United States, there are several different rates charged to institutions borrowing at the Discount Window; in 2006, these were the primary credit rate, the secondary credit rate, and the seasonal credit rate. The Federal Reserve does not publish information regarding institutions' eligibility for primary or secondary credit. Primary and secondary credit is normally offered on a secured overnight basis, while seasonal credit is extended up to nine months. The primary credit rate is normally set 100 basis points above the federal funds target, and the seasonal credit rate is set from an averaging of the effective fed funds rate and 90-day certificate of deposit rates. Institutions must provide acceptable collateral to secure the loan. The flood of funds released into the banking system reduced the immediate need for banks to rely on payments from other banks to make the payments they themselves owed others; this kept liquidity alive in the economy despite interruptions of communications. On August 17, 2007, the Board of Governors of the Federal Reserve announced a temporary change to primary credit lending terms: the discount rate was cut by 50 basis points, to 5.75% from 6.25%, and the term of loans was extended from overnight to up to thirty days. This reduced the spread of the primary credit rate over the fed funds rate from 100 basis points to 50 basis points. The maximum term of loans was later extended from thirty days to ninety days.
Less than a year before, the term of these loans had been only overnight. The primary credit rate was also reduced to 3.25% from 3.50%, which cut the spread of the primary credit rate over the fed funds rate to 25 basis points from 50 basis points. With the bankruptcy of Lehman Brothers, the volume of borrowing requests again increased dramatically. Under this arrangement, banks lend not directly to each other but to the central bank, and, on the other side, borrow not directly from each other but from the central bank. In the eurozone the discount window is called the Standing Facilities, which are used to manage overnight liquidity. Qualifying counterparties can use the Standing Facilities to increase the amount of cash they have available for overnight settlements using the Marginal Lending Facility; conversely, excess funds can be deposited within the European Central Bank System (ECBS). Counterparties must have collateral for the funds they receive from the Marginal Lending Facility and will be charged the overnight rate set by the ECBS. Excess capital can be deposited with the Deposit Facility, where it earns interest at the rate offered by the ECBS.
19.
Atan2
–
In a variety of computer languages, the function atan2 is the arctangent function with two arguments. For any real numbers x and y not both equal to zero, atan2(y, x) is the angle in radians between the positive x-axis of a plane and the point given by the coordinates (x, y) on it. The angle is positive for counter-clockwise angles (upper half-plane, y > 0), and negative for clockwise angles (lower half-plane, y < 0). The purpose of using two arguments instead of one is to gather information on the signs of the inputs in order to return the appropriate quadrant of the computed angle, which is not possible for the single-argument arctangent. It also avoids the problem of division by zero, as atan2(y, 0) will return a valid answer as long as y is non-zero. The atan2 function was first introduced in computer programming languages, but now it is also common in other fields of science and engineering. It dates back at least as far as the FORTRAN programming language and is found in many modern programming languages; among these are C's math.h standard library, the Java Math library, .NET's System.Math, the Python math module, and the Ruby Math module. In addition, many scripting languages, such as Perl, include the C-style atan2 function. The one-argument arctangent function cannot distinguish between diametrically opposite directions. For example, the anticlockwise angle from the x-axis to the vector (1, 1), calculated in the usual way as arctan(1/1), is π/4, or 45°. However, the angle between the x-axis and the vector (−1, −1) appears, by the same method, to be arctan(−1/−1), again π/4, even though the answer clearly should be −3π/4. In addition, an attempt to find the angle between the x-axis and a vector lying on the y-axis requires evaluation of arctan(y/0), which fails on division by zero. The atan2 function calculates the arc tangent of the two variables y and x: it is similar to calculating the arc tangent of y/x, except that the signs of both arguments are used to determine the quadrant of the result. Thus, the function takes into account the signs of both vector components, and places the angle in the correct quadrant; e.g. atan2(1, 1) = π/4 and atan2(−1, −1) = −3π/4.
In addition, atan2 can produce an angle of ±π/2 where the ordinary arctangent method breaks down. The atan2 function is useful in many applications involving vectors in Euclidean space, such as finding the direction from one point to another. A principal use is in computer graphics rotations, for converting rotation matrix representations into Euler angles. The function atan2 computes the principal value of the argument function applied to the complex number x + iy; that is, atan2(y, x) = Pr arg(x + iy) = Arg(x + iy). The argument can be changed by 2π without making any difference to the angle, but to define atan2 uniquely one uses the principal value in the range (−π, π]. On implementations without signed zero, or when given positive zero arguments, it will always return a value in the range (−π, π] rather than raising an error or returning a NaN. In Common Lisp, where optional arguments exist, the atan function allows one to optionally supply the x coordinate. In Mathematica, the form ArcTan[x, y] is used, where the one-parameter form supplies the normal arctangent; Mathematica classifies ArcTan[0, 0] as an indeterminate expression.
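The quadrant correction described above amounts to a case analysis on the signs of the arguments. The Python sketch below shows one common way atan2 can be defined from the one-argument arctangent (this is an illustrative definition, not necessarily how any particular library implements it):

```python
import math

def my_atan2(y, x):
    """atan2 built from the one-argument arctangent, using the signs of
    both arguments to place the angle in the correct quadrant."""
    if x > 0:
        return math.atan(y / x)                    # quadrants I and IV
    if x < 0:                                      # quadrants II and III:
        return math.atan(y / x) + (math.pi if y >= 0 else -math.pi)
    # x == 0: arctan(y/x) would divide by zero, but atan2 is still defined
    if y > 0:
        return math.pi / 2
    if y < 0:
        return -math.pi / 2
    raise ValueError("atan2(0, 0) is undefined")

print(round(my_atan2(1, 1), 6), round(my_atan2(-1, -1), 6))
# 0.785398 -2.356194 -- atan2 distinguishes (1, 1) from (-1, -1),
# which the one-argument arctangent cannot do
```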
20.
Winding number
–
The term winding number may also refer to the rotation number of an iterated map. In mathematics, the winding number of a closed curve in the plane around a given point is an integer representing the total number of times that curve travels counterclockwise around the point. The winding number depends on the orientation of the curve, and is negative if the curve travels around the point clockwise. Suppose we are given a closed, oriented curve in the xy plane. We can imagine the curve as the path of motion of some object, with the orientation indicating the direction in which the object moves; then the winding number of the curve is equal to the total number of counterclockwise turns that the object makes around the origin. When counting the total number of turns, counterclockwise motion counts as positive, while clockwise motion counts as negative. For example, if the object first circles the origin four times counterclockwise, and then circles the origin once clockwise, then the total winding number of the curve is three. Using this scheme, a curve that does not travel around the origin at all has winding number zero; therefore, the winding number of a curve may be any integer. The following pictures show curves with winding numbers between −2 and 3. A curve in the xy plane can be defined by parametric equations: x = x(t) and y = y(t) for 0 ≤ t ≤ 1. If we think of the parameter t as time, then these equations specify the motion of an object in the plane between t = 0 and t = 1. The path of this motion is a curve as long as the functions x(t) and y(t) are continuous, and this curve is closed as long as the position of the object is the same at t = 0 and t = 1. We can define the winding number of such a curve using the polar coordinate system. Assuming the curve does not pass through the origin, we can rewrite the parametric equations in polar form: r = r(t) and θ = θ(t), where the functions r(t) and θ(t) are required to be continuous, with r > 0. Because the initial and final positions are the same, θ(0) and θ(1) must differ by an integer multiple of 2π. This integer is the winding number: winding number = (θ(1) − θ(0)) / 2π. This defines the winding number of a curve around the origin in the xy plane.
By translating the coordinate system, we can extend this definition to include winding numbers around any point p. The winding number is defined in different ways in various parts of mathematics. Any closed curve partitions the plane into several connected regions, one of which is unbounded; the winding numbers of the curve around two points in the same region are equal.
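The polar-coordinate definition above translates directly into a computation for polygonal curves: accumulate the signed angle swept at each step and divide by 2π. A Python sketch (the function and its vertex-list interface are our own, for illustration):

```python
import math

def winding_number(points, p=(0.0, 0.0)):
    """Winding number of a closed polygonal curve (given as a list of
    vertices; the closing edge back to the first vertex is implicit)
    around the point p, by accumulating the signed angle at each step."""
    total = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i][0] - p[0], points[i][1] - p[1]
        x1, y1 = points[(i + 1) % n][0] - p[0], points[(i + 1) % n][1] - p[1]
        # Signed angle from (x0, y0) to (x1, y1): atan2(cross, dot).
        total += math.atan2(x0 * y1 - y0 * x1, x0 * x1 + y0 * y1)
    return round(total / (2 * math.pi))

square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]  # counterclockwise
print(winding_number(square), winding_number(square[::-1]))  # 1 -1
```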
21.
Closed differential form
–
Thus, an exact form is in the image of d, and a closed form is in the kernel of d. For an exact form α, α = dβ for some differential form β of degree one less than that of α; the form β is called a potential form or primitive for α. Since d2 = 0, β is not unique, but can be modified by the addition of the differential of a form of degree two less than that of α. Because d2 = 0, any exact form is automatically closed. The question of whether every closed form is exact depends on the topology of the domain of interest: on a contractible domain, every closed form is exact by the Poincaré lemma. The standard example of a closed form that is not exact is the angle form dθ on the punctured plane R2 ∖ {0}. Explicitly, the form is given as dθ = (−y dx + x dy) / (x2 + y2); this can be computed from a formula for the argument, most simply via arctan(y/x), recognizing 1/(1 + (y/x)2) as corresponding to the derivative of arctan. Differential forms in R2 and R3 were well known in the mathematical physics of the nineteenth century. In the plane, 0-forms are just functions, 1-forms are expressions α = f dx + g dy, and 2-forms are functions times the area element dx ∧ dy. The formula for the exterior derivative of such an α is dα = (gx − fy) dx ∧ dy, where the subscripts denote partial derivatives; therefore the condition for α to be closed is fy = gx. In this case, if h is a function then dh = hx dx + hy dy. The implication from exact to closed is then a consequence of the symmetry of second derivatives, hxy = hyx. On a Riemannian manifold, or more generally a pseudo-Riemannian manifold, 1-forms correspond to vector fields, so there is a notion of a vector field corresponding to a closed or exact form. In 3 dimensions, an exact vector field is called a conservative vector field, meaning that it is the derivative (gradient) of a 0-form, that is, of a function. A closed vector field is one whose derivative (curl) vanishes, and is called an irrotational vector field. Thinking of a vector field as a 2-form instead, a closed vector field is one whose derivative (divergence) vanishes, and is called an incompressible flow. The Poincaré lemma states that if B is an open ball in Rn, any smooth closed p-form ω defined on B is exact, for any integer p with 1 ≤ p ≤ n.
Translating if necessary, it can be assumed that the ball B has centre 0. Let αs be the flow on Rn defined by αs(x) = e−s x; for s ≥ 0 it carries B into itself and induces an action on functions and differential forms. The derivative of the flow is the vector field X defined on functions f by Xf = d/ds (f ∘ αs)|s=0; it is the negative of the radial vector field, −r ∂/∂r = −∑ xi ∂/∂xi. Often it is convenient to write the flow multiplicatively as a function of t = e−s, setting βt = αs; only for 0 < t ≤ 1 will βt carry B into itself. The derivative of the flow on forms defines the Lie derivative with respect to X, given by LX ω = d/ds (αs∗ ω)|s=0.
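As a check (a standard computation, not part of the source text), the closedness condition fy = gx mentioned above can be verified directly for the angle form, whose component functions are f = −y/(x2 + y2) and g = x/(x2 + y2):

```latex
\partial_y\!\left(\frac{-y}{x^2+y^2}\right)
  = \frac{-(x^2+y^2)+2y^2}{(x^2+y^2)^2}
  = \frac{y^2-x^2}{(x^2+y^2)^2}
  = \frac{(x^2+y^2)-2x^2}{(x^2+y^2)^2}
  = \partial_x\!\left(\frac{x}{x^2+y^2}\right)
```

So dθ is closed; yet its integral around the unit circle is 2π ≠ 0, so it cannot be exact on the punctured plane, consistent with the Poincaré lemma applying only to contractible domains.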
22.
Exact differential form
–
Thus, an exact form is in the image of d, and a closed form is in the kernel of d. For an exact form α, α = dβ for some differential form β of degree one less than that of α; the form β is called a potential form or primitive for α. Since d2 = 0, β is not unique, but can be modified by the addition of the differential of a form of degree two less than that of α. Because d2 = 0, any exact form is automatically closed. The question of whether every closed form is exact depends on the topology of the domain of interest: on a contractible domain, every closed form is exact by the Poincaré lemma. The standard example of a closed form that is not exact is the angle form dθ on the punctured plane R2 ∖ {0}. Explicitly, the form is given as dθ = (−y dx + x dy) / (x2 + y2); this can be computed from a formula for the argument, most simply via arctan(y/x), recognizing 1/(1 + (y/x)2) as corresponding to the derivative of arctan. Differential forms in R2 and R3 were well known in the mathematical physics of the nineteenth century. In the plane, 0-forms are just functions, 1-forms are expressions α = f dx + g dy, and 2-forms are functions times the area element dx ∧ dy. The formula for the exterior derivative of such an α is dα = (gx − fy) dx ∧ dy, where the subscripts denote partial derivatives; therefore the condition for α to be closed is fy = gx. In this case, if h is a function then dh = hx dx + hy dy. The implication from exact to closed is then a consequence of the symmetry of second derivatives, hxy = hyx. On a Riemannian manifold, or more generally a pseudo-Riemannian manifold, 1-forms correspond to vector fields, so there is a notion of a vector field corresponding to a closed or exact form. In 3 dimensions, an exact vector field is called a conservative vector field, meaning that it is the derivative (gradient) of a 0-form, that is, of a function. A closed vector field is one whose derivative (curl) vanishes, and is called an irrotational vector field. Thinking of a vector field as a 2-form instead, a closed vector field is one whose derivative (divergence) vanishes, and is called an incompressible flow. The Poincaré lemma states that if B is an open ball in Rn, any smooth closed p-form ω defined on B is exact, for any integer p with 1 ≤ p ≤ n.
Translating if necessary, it can be assumed that the ball B has centre 0. Let αs be the flow on Rn defined by αs(x) = e−s x; for s ≥ 0 it carries B into itself and induces an action on functions and differential forms. The derivative of the flow is the vector field X defined on functions f by Xf = d/ds (f ∘ αs)|s=0; it is the negative of the radial vector field, −r ∂/∂r = −∑ xi ∂/∂xi. Often it is convenient to write the flow multiplicatively as a function of t = e−s, setting βt = αs; only for 0 < t ≤ 1 will βt carry B into itself. The derivative of the flow on forms defines the Lie derivative with respect to X, given by LX ω = d/ds (αs∗ ω)|s=0.
23.
De Rham cohomology
–
It is a cohomology theory based on the existence of differential forms with prescribed properties. The de Rham complex is the cochain complex of exterior differential forms on some smooth manifold M: 0 → Ω0 →d Ω1 →d Ω2 →d Ω3 → ⋯, where Ω0 is the space of smooth functions on M, Ω1 is the space of 1-forms, and so forth. Forms whose exterior derivative is zero are called closed, and forms that are the exterior derivative of another form are called exact; since d2 = 0, every exact form is closed. The converse, however, is in general not true: closed forms need not be exact. A simple but significant case is the 1-form of angle measure on the unit circle, which is closed but not exact; we can, however, change the topology by removing just one point, after which it becomes exact. The idea of de Rham cohomology is to classify the different types of closed forms on a manifold. One performs this classification by saying that two closed forms α, β ∈ Ωk are cohomologous if they differ by an exact form; this classification induces an equivalence relation on the space of closed forms in Ωk. One then defines the k-th de Rham cohomology group H^k_dR to be the set of equivalence classes, that is, the set of closed forms in Ωk modulo the exact forms. Note that, for any manifold M with n connected components, H^0_dR(M) ≅ Rn; this follows from the fact that any smooth function on M with zero derivative is constant on each of the connected components of M. One may often find the general de Rham cohomologies of a manifold using this fact about the zero cohomology. Another useful fact is that the de Rham cohomology is a homotopy invariant. For the n-sphere: let n > 0, m ≥ 0, and I an open real interval; then H^k_dR(Sn × Im) ≃ R if k = 0 or k = n, and 0 otherwise. Similarly, for punctured Euclidean space, which is simply Euclidean space with the origin removed, one obtains H^k_dR(Rn ∖ {0}) ≃ R for k = 0 and k = n − 1 (n ≥ 2), and 0 otherwise. Stokes' theorem is an expression of duality between de Rham cohomology and the homology of chains: it says that the pairing of differential forms and chains, via integration, gives a homomorphism from de Rham cohomology H^k_dR to the singular cohomology groups Hk. De Rham's theorem, proved by Georges de Rham in 1931, states that for a smooth manifold M this map is in fact an isomorphism between de Rham cohomology and singular cohomology with real coefficients.
The wedge product endows the direct sum of these groups with a ring structure. A further result of the theorem is that the two cohomology rings are isomorphic as graded rings, where the analogous product on singular cohomology is the cup product. Let Ωk denote the sheaf of germs of k-forms on M (with Ω0 the sheaf of smooth functions on M). By the Poincaré lemma, the following sequence of sheaves is exact: 0 → R → Ω0 →d Ω1 →d Ω2 →d ⋯ →d Ωm → 0, where m is the dimension of M. This sequence breaks up into short exact sequences 0 → dΩk−1 → Ωk →d dΩk → 0, each of which induces a long exact sequence in cohomology.
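As a small worked example of these groups (a standard computation, stated here as an illustration rather than taken from the text), the sphere formula above with n = 1 gives the de Rham cohomology of the circle:

```latex
H^{k}_{\mathrm{dR}}(S^1) \;\cong\;
\begin{cases}
\mathbb{R}, & k = 0, 1,\\
0, & \text{otherwise,}
\end{cases}
\qquad
H^{1}_{\mathrm{dR}}(S^1) = \mathbb{R}\cdot[d\theta],
\quad \oint_{S^1} d\theta = 2\pi \neq 0,
```

so the angle form dθ is closed but not exact and generates H1, while H0 ≅ R reflects the fact that S1 is connected.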
24.
Differential of a function
–
In calculus, the differential represents the principal part of the change in a function y = f(x) with respect to changes in the independent variable. The differential dy is defined by dy = f′(x) dx, where f′(x) is the derivative of f with respect to x; one also writes df = f′(x) dx. The precise meaning of the variables dy and dx depends on the context of the application. Traditionally, the variables dx and dy were considered to be infinitesimally small, and this interpretation is made rigorous in non-standard analysis. The quotient dy/dx is then not infinitely small; rather, it is a real number. The use of infinitesimals in this form was widely criticized, for instance by the famous pamphlet The Analyst by Bishop Berkeley. Augustin-Louis Cauchy defined the differential without appeal to the atomism of Leibniz's infinitesimals. In physical treatments, such as those applied to the theory of thermodynamics, the infinitesimal view still prevails. Courant & John reconcile the physical use of infinitesimal differentials with the mathematical impossibility of them as follows: the differentials represent finite non-zero values that are smaller than the degree of accuracy required for the particular purpose for which they are intended; thus physical infinitesimals need not appeal to a corresponding mathematical infinitesimal in order to have a precise sense. Following twentieth-century developments in mathematical analysis and differential geometry, it became clear that the notion of the differential of a function could be extended in a variety of ways. In real analysis, it is more desirable to deal directly with the differential as the principal part of the increment of a function. This leads directly to the notion that the differential of a function at a point is a linear functional of an increment Δx, and this approach allows the differential to be developed for a variety of more sophisticated spaces. In non-standard calculus, differentials are regarded as infinitesimals, which can themselves be put on a rigorous footing.
The differential is defined in modern treatments of calculus as follows. The differential of a function f of a single real variable x is the function df of two independent real variables x and Δx given by df(x, Δx) := f′(x) Δx. One or both of the arguments may be suppressed: one may see df(x) or simply df. If y = f(x), the differential may also be written as dy. For a function y = f(x1, …, xn) of several variables, the partial differential of y with respect to x1 is ∂y/∂x1 dx1, involving the partial derivative of y with respect to x1. The total differential is then defined as dy = ∂y/∂x1 Δx1 + ⋯ + ∂y/∂xn Δxn. Since, with this definition, dxi = Δxi, the total differential may equivalently be written dy = ∂y/∂x1 dx1 + ⋯ + ∂y/∂xn dxn. In measurement, the total differential is used in estimating the error Δf of a function f based on the errors Δx, Δy, … of the parameters x, y, …. As the errors are assumed to be independent, the analysis describes the worst-case scenario: the absolute values of the component errors are used, because after simple computation the derivative may have a negative sign.
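The worst-case error estimate described above can be illustrated for the product f(x, y) = x·y, where the total differential gives |Δf| ≈ |y| |Δx| + |x| |Δy| (the numbers below are invented for illustration):

```python
# Worst-case error estimate for f(x, y) = x * y via the total differential:
# |df| = |df/dx| |dx| + |df/dy| |dy| = |y| |dx| + |x| |dy|.

def product_error(x, y, dx, dy):
    return abs(y) * abs(dx) + abs(x) * abs(dy)

x, y = 10.0, 5.0
dx, dy = 0.1, 0.2          # measurement errors in x and y
est = product_error(x, y, dx, dy)
actual = abs((x + dx) * (y + dy) - x * y)   # worst-case actual deviation
print(est, round(actual, 3))  # 2.5 2.52 -- the linear estimate is close to
                              # the true error, short of the small dx*dy term
```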
25.
Open set
–
In topology, an open set is an abstract concept generalizing the idea of an open interval in the real line. The defining conditions are very loose, and they allow enormous flexibility in the choice of open sets: in the two extremes, every set can be open, or no set can be open except the space itself and the empty set. In practice, however, open sets are usually chosen to be similar to the open intervals of the real line. The notion of an open set provides a way to speak of nearness of points in a topological space. Once a choice of open sets is made, the properties of continuity, connectedness, and compactness can be defined; each choice of open sets for a space is called a topology. Open sets and the topologies that they comprise are of central importance in point-set topology. Intuitively, an open set provides a method to distinguish two points: for example, if about one point in a topological space there exists an open set not containing another (distinct) point, the two points are referred to as topologically distinguishable. In this manner, one may speak of two subsets of a topological space being "near" without concretely defining a metric on the topological space. Therefore, topological spaces may be seen as a generalization of metric spaces. In the set of all real numbers, one has the natural Euclidean metric, that is, a function which measures the distance between two real numbers: d(x, y) = |x − y|. Therefore, given a real number x, one can speak of the set of all points close to that real number, that is, within ε of x. In essence, points within ε of x approximate x to an accuracy of degree ε. Note that ε > 0 always, but as ε becomes smaller and smaller, one obtains points that approximate x to a higher and higher degree of accuracy. For example, if x = 0 and ε = 1, the points within ε of x are precisely the points of the interval (−1, 1). However, with ε = 0.5, the points within ε of x are precisely the points of (−0.5, 0.5). Clearly, these points approximate x to a greater degree of accuracy than when ε = 1.
The previous discussion shows, for the case x = 0, that one may approximate x to higher and higher degrees of accuracy by taking ε smaller and smaller. In particular, sets of the form (−ε, ε) give us a lot of information about points close to x = 0. Thus, rather than speaking of a concrete Euclidean metric, one may use such sets to describe points close to x. If, instead, the only set used to "measure distance" from 0 were R itself, then every point would be close to 0, since there would be only one possible degree of accuracy in approximating 0: being a member of R. Thus, we find that in this sense, every real number is distance 0 away from 0. It may help in this case to think of the measure as being a binary condition: all things in R are equally close to 0. In general, one refers to the family of sets containing 0, used to approximate 0, as a neighborhood basis. In fact, one may generalize these notions to an arbitrary set, rather than just the real numbers. In this case, given a point x of that set, one may define a collection of sets "around" x (that is, containing x). Of course, this collection would have to satisfy certain properties, for otherwise we may not have a well-defined method to measure distance.
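The ε-neighborhood idea above can be sketched directly. This is an illustrative helper of my own, not part of any standard topology library; it simply tests membership in the open interval (x − ε, x + ε).

```python
def within_epsilon(x, epsilon, points):
    """Return the points lying in the open interval (x - epsilon, x + epsilon)."""
    return [p for p in points if abs(p - x) < epsilon]

# Points within 1 of 0, i.e. in the open interval (-1, 1).
# Note 1 itself is excluded: the interval is open.
close = within_epsilon(0, 1, [-2, -0.5, 0, 0.7, 1, 3])
```

Shrinking ε (say to 0.5) discards 0.7 as well, matching the idea that smaller ε means a higher degree of accuracy in approximating 0.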
26.
Differentiable function
–
In calculus, a differentiable function of one real variable is a function whose derivative exists at each point in its domain. As a result, the graph of a differentiable function must have a (non-vertical) tangent line at each point in its domain and be relatively smooth. More generally, if x0 is a point in the domain of a function f, then f is said to be differentiable at x0 if the derivative f′(x0) exists; this means that the graph of f has a non-vertical tangent line at the point (x0, f(x0)). The function f may also be called locally linear at x0. If f is differentiable at a point x0, then f must also be continuous at x0. In particular, any differentiable function must be continuous at every point in its domain. The converse does not hold: a continuous function need not be differentiable. For example, a function with a bend, cusp, or vertical tangent may be continuous but fail to be differentiable at the location of the anomaly. Most functions that occur in practice have derivatives at all points or at almost every point. However, a result of Stefan Banach states that the set of functions that have a derivative at some point is a meagre set in the space of all continuous functions. Informally, this means that differentiable functions are very atypical among continuous functions. The first known example of a function that is continuous everywhere but differentiable nowhere is the Weierstrass function. A function f is said to be continuously differentiable if the derivative f′(x) exists and is itself a continuous function. Though the derivative of a differentiable function never has a jump discontinuity, it is possible for the derivative to have an essential discontinuity. For example, the function f(x) = x² sin(1/x) for x ≠ 0, with f(0) = 0, is differentiable at 0, since f′(0) = lim_{ε→0} (ε² sin(1/ε) − 0)/ε = 0 exists. However, for x ≠ 0, f′(x) = 2x sin(1/x) − cos(1/x), which has no limit as x → 0. Nevertheless, Darboux's theorem implies that the derivative of any function satisfies the conclusion of the intermediate value theorem. Sometimes continuously differentiable functions are said to be of class C1; a function is of class C2 if the first and second derivative of the function both exist and are continuous.
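The x² sin(1/x) example can be checked numerically. The sketch below (my own illustration) shows the difference quotient at 0 shrinking toward f′(0) = 0, while f′ sampled ever closer to 0 keeps oscillating between values near −1 and +1.

```python
import math

def f(x):
    # differentiable everywhere, but f' has an essential discontinuity at 0
    return x * x * math.sin(1 / x) if x != 0 else 0.0

def fprime(x):
    # for x != 0: f'(x) = 2x sin(1/x) - cos(1/x)
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

# Difference quotient at 0: |f(h)/h| = |h sin(1/h)| <= |h|, so f'(0) = 0.
quotients = [f(h) / h for h in (1e-1, 1e-3, 1e-5)]

# f' sampled at x = 1/(n*pi): equals -cos(n*pi) up to a tiny term,
# so it alternates near -1 and +1 and has no limit as x -> 0.
samples = [fprime(1 / (n * math.pi)) for n in (100, 101)]
```

This is exactly why f is differentiable but not continuously differentiable: the derivative exists at every point, yet is not continuous at 0.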
More generally, a function is said to be of class Ck if the first k derivatives f′(x), f″(x), …, f(k)(x) all exist and are continuous. If derivatives f(n) exist for all positive integers n, the function is smooth, or equivalently, of class C∞. For functions of several real variables, if all the partial derivatives of a function exist and are continuous in a neighborhood of a point, then the function is differentiable at that point. Conversely, if a function is differentiable at x0, then all of the partial derivatives exist at x0. A similar formulation of the higher-dimensional derivative is provided by the fundamental increment lemma found in single-variable calculus. Note that existence of the partial derivatives does not in general guarantee that a function is differentiable at a point.
27.
Function (mathematics)
–
In mathematics, a function is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output. An example is the function that relates each real number x to its square x². The output of a function f corresponding to an input x is denoted by f(x). In this example, if the input is −3, then the output is 9, and we may write f(−3) = 9; likewise, if the input is 3, then the output is also 9, and we may write f(3) = 9. The input variable(s) are sometimes referred to as the argument(s) of the function. Functions of various kinds are the central objects of investigation in most fields of modern mathematics. There are many ways to describe or represent a function. Some functions may be defined by a formula or algorithm that tells how to compute the output for a given input. Others are given by a picture, called the graph of the function. In science, functions are sometimes defined by a table that gives the outputs for selected inputs. A function could be described implicitly, for example as the inverse to another function or as a solution of a differential equation. Sometimes the codomain is called the function's range, but more commonly the word "range" is used to mean, instead, specifically the set of outputs. For example, we could define a function using the rule f(x) = x² by saying that the domain and codomain are the real numbers. The image of this function is the set of non-negative real numbers. In analogy with arithmetic, it is possible to define addition, subtraction, and multiplication of functions. Another important operation defined on functions is function composition, where the output from one function becomes the input to another function. Linking each shape to its color is a function from X to Y: each shape is linked to a color; there is no shape that lacks a color and no shape that has more than one color. This function will be referred to as the "color-of-the-shape function". The input to a function is called the argument and the output is called the value.
The set of all permitted inputs to a function is called the domain of the function. Thus, the domain of the color-of-the-shape function is the set of the four shapes. The concept of a function does not require that every possible output is the value of some argument. A second example of a function is the following: the domain is chosen to be the set of natural numbers, and the codomain is the set of integers. The function associates to any natural number n the number 4 − n. For example, to 1 it associates 3 and to 10 it associates −6. A third example of a function has the set of polygons as domain and the set of natural numbers as codomain.
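The second example above can be written out explicitly. This is a minimal sketch of my own: the domain restriction (natural numbers only) is enforced by hand, since nothing in Python itself encodes a function's domain.

```python
def associate(n):
    """The function n -> 4 - n with domain the natural numbers
    and codomain the integers."""
    if not (isinstance(n, int) and n >= 0):
        raise ValueError("argument outside the domain (natural numbers)")
    return 4 - n

# As in the text: to 1 it associates 3, and to 10 it associates -6.
values = [associate(1), associate(10)]
```

Note that each input yields exactly one output, which is the defining property of a function; the outputs need not exhaust the codomain.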
28.
Derivative
–
The derivative of a function of a real variable measures the sensitivity to change of the function value with respect to a change in its argument. Derivatives are a fundamental tool of calculus. For example, the derivative of the position of a moving object with respect to time is the object's velocity. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value; for this reason, the derivative is often described as the "instantaneous rate of change", the ratio of the instantaneous change in the dependent variable to that of the independent variable. Derivatives may be generalized to functions of several real variables. In this generalization, the derivative is reinterpreted as a linear transformation whose graph is the best linear approximation to the graph of the original function. The Jacobian matrix is the matrix that represents this linear transformation with respect to the basis given by the choice of independent and dependent variables, and it can be calculated in terms of the partial derivatives with respect to the independent variables. For a real-valued function of several variables, the Jacobian matrix reduces to the gradient vector. The process of finding a derivative is called differentiation; the reverse process is called antidifferentiation. The fundamental theorem of calculus states that antidifferentiation is the same as integration. Differentiation and integration constitute the two fundamental operations in single-variable calculus. Differentiation is the action of computing a derivative. The derivative of a function y = f(x) of a variable x is a measure of the rate at which the value y of the function changes with respect to the change of the variable x. It is called the derivative of f with respect to x. If x and y are real numbers, and if the graph of f is plotted against x, the derivative is the slope of this graph at each point.
The simplest case, apart from the trivial case of a constant function, is when y is a linear function of x: y = mx + b. In this case, y + Δy = f(x + Δx) = m(x + Δx) + b = mx + mΔx + b = y + mΔx. Thus, since y + Δy = y + mΔx, it follows that Δy = mΔx, and so m = Δy/Δx. This gives an exact value for the slope of a line. If the function f is not linear, however, then the change in y divided by the change in x varies; differentiation is a method to find an exact value for this rate of change at any given value of x. The idea, illustrated by Figures 1 to 3, is to compute the rate of change as the limiting value of the ratio of the differences Δy / Δx as Δx becomes infinitely small.
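The limiting process above can be seen numerically with a difference quotient (the helper name below is mine, purely for illustration): for a linear function the quotient equals m exactly for every Δx, while for a nonlinear function it varies but settles toward the derivative as Δx shrinks.

```python
def difference_quotient(f, x, dx):
    """Average rate of change of f over [x, x + dx], i.e. Δy / Δx."""
    return (f(x + dx) - f(x)) / dx

# Linear case y = m*x + b: the quotient is exactly m, regardless of dx.
m, b = 2.5, 1.0
linear = lambda x: m * x + b
q_linear = difference_quotient(linear, 3.0, 0.5)

# Nonlinear case y = x^2: the quotient varies with dx, but approaches
# the derivative 2*3 = 6 as dx becomes small.
square = lambda x: x * x
q = [difference_quotient(square, 3.0, dx) for dx in (1.0, 0.1, 0.001)]
```

This is only an approximation of the limit, of course; the derivative itself is the exact limiting value as Δx tends to 0.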
29.
Linear map
–
In mathematics, a linear map is a mapping V → W between two modules (for example, two vector spaces) that preserves the operations of addition and scalar multiplication. An important special case is when V = W, in which case the map is called a linear operator, or an endomorphism of V. Sometimes the term linear function has the same meaning as linear map. A linear map always maps linear subspaces onto linear subspaces (possibly of a lower dimension); for instance, it maps a plane through the origin to a plane, straight line, or point. Linear maps can often be represented as matrices, and simple examples include rotation and reflection linear transformations. In the language of abstract algebra, a linear map is a module homomorphism; in the language of category theory it is a morphism in the category of modules over a given ring. Let V and W be vector spaces over the same field K. A function f : V → W is a linear map if for any vectors x1, …, xm ∈ V and scalars a1, …, am ∈ K, the equality f(a1 x1 + ⋯ + am xm) = a1 f(x1) + ⋯ + am f(xm) holds. If V and W are vector spaces over different fields, it is then necessary to specify which of these fields is being used in the definition of "linear". If V and W are considered as spaces over the field K as above, we talk about K-linear maps. For example, the conjugation of complex numbers is an R-linear map C → C, but it is not C-linear. A linear map from V to K is called a linear functional. These statements generalize to any left-module RM over a ring R without modification, and to any right-module upon reversing of the scalar multiplication. The zero map between two left-modules over the same ring is always linear. The identity map on any module is a linear operator. Any homothecy centered in the origin of a vector space, v ↦ cv where c is a scalar, is a linear operator. This does not hold in general for modules, where such a map might only be semilinear. For real numbers, the map x ↦ x² is not linear. Any matrix defines a linear map between finite-dimensional vector spaces; conversely, any linear map between finite-dimensional vector spaces can be represented in this manner, see the following section. Differentiation defines a linear map from the space of all differentiable functions to the space of all functions. It also defines a linear operator on the space of all smooth functions.
If V and W are finite-dimensional vector spaces over a field F, then functions that send linear maps f : V → W to dimF(W) × dimF(V) matrices in the way described in the sequel are themselves linear maps. The expected value of a random variable is linear: for random variables X and Y we have E[X + Y] = E[X] + E[Y] and E[aX] = aE[X].
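The two defining properties, additivity and homogeneity, can be checked on a concrete matrix map. The sketch below is my own minimal illustration (plain lists rather than a linear-algebra library), using a 90-degree rotation of the plane as the linear map.

```python
def matvec(A, x):
    """Apply the linear map represented by matrix A (list of rows) to vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def scale(c, u):
    return [c * a for a in u]

A = [[0, -1], [1, 0]]           # rotation of R^2 by 90 degrees
u, v, c = [1.0, 2.0], [3.0, -1.0], 4.0

# f(u + v) = f(u) + f(v)  and  f(c*u) = c*f(u):
additive = matvec(A, add(u, v)) == add(matvec(A, u), matvec(A, v))
homogeneous = matvec(A, scale(c, u)) == scale(c, matvec(A, u))
```

Any finite-dimensional linear map can be exercised the same way once a basis is chosen, since the matrix representation is exactly what `matvec` applies.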
30.
Reciprocal lattice
–
In physics, the reciprocal lattice represents the Fourier transform of another lattice. In normal usage, this first lattice is usually a periodic spatial function in real space and is known as the direct lattice. The reciprocal lattice plays a fundamental role in most analytic studies of periodic structures. In neutron and X-ray diffraction, due to the Laue conditions, the momentum difference between incoming and diffracted X-rays of a crystal is a reciprocal lattice vector. The diffraction pattern of a crystal can be used to determine the reciprocal vectors of the lattice; using this process, one can infer the atomic arrangement of a crystal. The Brillouin zone is a Wigner–Seitz cell of the reciprocal lattice. Assume an ideal Bravais lattice R_n = n1·a1 + n2·a2, where n1, n2 ∈ Z. Any quantity, e.g. the electronic density in a crystal, can be written as a periodic function f(r) = f(r + R_n); due to the periodicity it is useful to write it in Fourier expansions. Mathematically, we can describe the reciprocal lattice as the set of all vectors G_m that satisfy the above identity for all lattice point position vectors R_n. This reciprocal lattice is itself a Bravais lattice, and the reciprocal of the reciprocal lattice is the original lattice. Using column vector representation of vectors, the formulae above can be rewritten using matrix inversion. This method appeals to the definition, and allows generalization to arbitrary dimensions; the cross product formula dominates introductory materials on crystallography. The above definition is called the "physics" definition, as the factor of 2π comes naturally from the study of periodic structures. The crystallographer's definition has the advantage that the definition of b1 is just the reciprocal magnitude of a1 in the direction of a2 × a3. This can simplify certain mathematical manipulations, and expresses reciprocal lattice dimensions in units of spatial frequency.
It is a matter of taste which definition of the lattice is used, as long as the two are not mixed. Each point in the reciprocal lattice corresponds to a set of lattice planes in the real space lattice. The direction of the reciprocal lattice vector corresponds to the normal to the real space planes. The magnitude of the reciprocal lattice vector is given in reciprocal length and is equal to the reciprocal of the interplanar spacing of the real space planes. Reciprocal lattices for the cubic crystal system are as follows. The simple cubic Bravais lattice, with cubic primitive cell of side a, has for its reciprocal a simple cubic lattice with a cubic primitive cell of side 2π/a.
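The cross-product construction under the physics convention, b_i = 2π (a_j × a_k) / (a_1 · (a_2 × a_3)), can be sketched directly. The helper names below are mine; the final check reproduces the simple-cubic fact just stated, that a cube of side a has a reciprocal cube of side 2π/a.

```python
import math

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def reciprocal_vectors(a1, a2, a3):
    """Physics convention: b_i = 2*pi * (a_j x a_k) / (a1 . (a2 x a3))."""
    volume = dot(a1, cross(a2, a3))          # primitive cell volume
    return [[2 * math.pi * c / volume for c in cross(a2, a3)],
            [2 * math.pi * c / volume for c in cross(a3, a1)],
            [2 * math.pi * c / volume for c in cross(a1, a2)]]

# Simple cubic lattice of side a = 2: reciprocal is simple cubic of side 2*pi/2 = pi.
a = 2.0
b1, b2, b3 = reciprocal_vectors([a, 0, 0], [0, a, 0], [0, 0, a])
```

The crystallographer's convention is the same construction with the factor of 2π dropped.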
31.
Tensor
–
In mathematics, tensors are geometric objects that describe linear relations between geometric vectors, scalars, and other tensors. Elementary examples of such relations include the dot product, the cross product, and linear maps. Geometric vectors, often used in physics and engineering applications, are themselves tensors. Given a coordinate basis or fixed frame of reference, a tensor can be represented as an organized multidimensional array of numerical values. The order of a tensor is the dimensionality of the array needed to represent it, or equivalently, the number of indices needed to label a component of that array. For example, a linear map is represented by a matrix in a basis, and therefore is a 2nd-order tensor. A vector is represented as a 1-dimensional array in a basis, and is a 1st-order tensor; scalars are single numbers and are thus 0th-order tensors. Because they express a relationship between vectors, tensors themselves must be independent of a particular choice of coordinate system. The precise form of the transformation law determines the type of the tensor. The tensor type is a pair of natural numbers (n, m), where n is the number of contravariant indices and m is the number of covariant indices; the total order of a tensor is the sum of these two numbers. The concept enabled an alternative formulation of the differential geometry of a manifold in the form of the Riemann curvature tensor. There are several approaches to defining tensors; although seemingly different, the approaches just describe the same geometric concept using different languages and at different levels of abstraction. For example, an operator is represented in a basis as a two-dimensional square n × n array. The numbers in the array are known as the scalar components of the tensor or simply its components. They are denoted by indices giving their position in the array, as subscripts and superscripts. For example, the components of an order 2 tensor T could be denoted Tij, where i and j are indices. Whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below.
The total number of indices required to identify each component uniquely is equal to the dimension of the array, and is called the order of the tensor. However, the term "rank" generally has another meaning in the context of matrices. Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis.
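The change-of-basis behavior can be illustrated for the simplest case, a 1st-order contravariant tensor (a vector). The sketch below is my own 2×2 illustration using Cramer's rule: the geometric vector stays fixed, but its components shrink when the basis vectors are enlarged, which is the contravariant transformation law.

```python
def components_in_basis(B, v):
    """Components (x, y) of vector v with x*col1 + y*col2 = v,
    where col1, col2 are the columns of the 2x2 basis matrix B."""
    (a, b), (c, d) = B
    det = a * d - b * c
    # Cramer's rule for the 2x2 linear system
    return [(v[0] * d - b * v[1]) / det,
            (a * v[1] - c * v[0]) / det]

v = [3.0, 1.0]                                  # a fixed geometric vector
standard = components_in_basis([[1, 0], [0, 1]], v)   # standard basis
doubled = components_in_basis([[2, 0], [0, 2]], v)    # basis vectors twice as long
# the components halve when the basis doubles: contravariance
```

Covariant components would transform the opposite way (with the basis change itself rather than its inverse), which is exactly what the superscript/subscript distinction records.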
32.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007, and 10 digits long if assigned before 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN configuration of recognition was generated in 1967 based upon the 9-digit Standard Book Numbering (SBN) created in 1966. The 10-digit ISBN format was developed by the International Organization for Standardization (ISO) and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines. The ISBN configuration of recognition was generated in 1967 in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974. The ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340 01381 8: 340 indicating the publisher, 01381 their serial number, and 8 the check digit. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with "Bookland" European Article Numbers (EAN-13s).
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces. Separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture. In the United Kingdom, United States, and some other countries, the service is provided by non-government-funded organisations. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker.
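The 13-digit format carries a standard check digit: the first 12 digits are weighted alternately 1 and 3, and the check digit brings the total to a multiple of 10. The helper names below are mine, but the weighting rule is the standard ISBN-13 (EAN-13) algorithm; the SBN conversion mirrors the prefixing rule described above.

```python
def isbn13_check_digit(first12):
    """Check digit for an ISBN-13: weights alternate 1, 3 over the
    first 12 digits; the check digit makes the weighted sum divisible by 10."""
    total = sum((1 if i % 2 == 0 else 3) * int(d)
                for i, d in enumerate(first12))
    return (10 - total % 10) % 10

def sbn_to_isbn10(sbn9):
    """An SBN becomes an ISBN-10 by prefixing the digit 0;
    the check digit is unchanged."""
    return "0" + sbn9

# The Mr. J. G. Reeder Returns SBN from the text:
isbn10 = sbn_to_isbn10("340013818")   # "0340013818", i.e. ISBN 0-340-01381-8
check = isbn13_check_digit("978030640615")
```

Note that ISBN-10 uses a different check scheme (weights 10 down to 2, modulo 11), which is why converting a 10-digit ISBN to 13 digits requires recomputing the final digit even though converting an SBN to ISBN-10 does not.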
33.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to the exact scope and definition of mathematics. Mathematicians seek out patterns and use them to formulate new conjectures, and resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, then mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, and measurement; practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said, "The universe cannot be read until we have learned the language in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences.
Applied mathematics has led to entirely new mathematical disciplines, such as statistics and game theory. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind. There is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns; in Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a systematic study of mathematics in its own right with Greek mathematics. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today; the overwhelming majority of works in this ocean contain new mathematical theorems and their proofs. The word máthēma is derived from μανθάνω, while the modern Greek equivalent is μαθαίνω, both of which mean "to learn". In Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study" even in Classical times.
34.
Coordinate system
–
The order of the coordinates is significant, and they are sometimes identified by their position in an ordered tuple and sometimes by a letter, as in "the x-coordinate". The coordinates are taken to be real numbers in elementary mathematics. The use of a coordinate system allows problems in geometry to be translated into problems about numbers and vice versa; this is the basis of analytic geometry. The simplest example of a coordinate system is the identification of points on a line with real numbers using the number line. In this system, an arbitrary point O (the origin) is chosen on a given line. The coordinate of a point P is defined as the signed distance from O to P. Each point is given a unique coordinate and each real number is the coordinate of a unique point. The prototypical example of a coordinate system is the Cartesian coordinate system. In the plane, two perpendicular lines are chosen and the coordinates of a point are taken to be the signed distances to the lines. In three dimensions, three mutually perpendicular planes are chosen and the three coordinates of a point are the signed distances to each of the planes. This can be generalized to create n coordinates for any point in n-dimensional Euclidean space. Depending on the direction and order of the coordinate axes, the system may be a right-hand or a left-hand system. This is one of many coordinate systems. Another common coordinate system for the plane is the polar coordinate system. A point is chosen as the pole and a ray from this point is taken as the polar axis. For a given angle θ, there is a single line through the pole whose angle with the polar axis is θ. Then there is a unique point on this line whose signed distance from the origin is r for a given number r. For a given pair of coordinates (r, θ) there is a single point, but any point is represented by many pairs of coordinates: for example, (r, θ), (r, θ + 2π) and (−r, θ + π) are all polar coordinates for the same point. The pole is represented by (0, θ) for any value of θ. There are two common methods for extending the polar coordinate system to three dimensions.
In the cylindrical coordinate system, a z-coordinate with the same meaning as in Cartesian coordinates is added to the r and θ polar coordinates, giving a triple (r, θ, z). Spherical coordinates take this a step further by converting the pair of cylindrical coordinates (r, z) to polar coordinates, giving a triple. A point in the plane may be represented in homogeneous coordinates by a triple (x, y, z) where x/z and y/z are the Cartesian coordinates of the point.
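The polar-Cartesian correspondence, including the non-uniqueness of polar coordinates noted above, can be sketched with the standard conversion formulas (the function names below are my own):

```python
import math

def polar_to_cartesian(r, theta):
    """(r, theta) -> (x, y) via x = r cos(theta), y = r sin(theta)."""
    return (r * math.cos(theta), r * math.sin(theta))

def cartesian_to_polar(x, y):
    """(x, y) -> (r, theta) with r >= 0; atan2 picks the correct quadrant."""
    return (math.hypot(x, y), math.atan2(y, x))

# Distinct polar pairs can name the same point: (r, theta) vs (-r, theta + pi).
p1 = polar_to_cartesian(2.0, math.pi / 3)
p2 = polar_to_cartesian(-2.0, math.pi / 3 + math.pi)
```

Going back with `cartesian_to_polar` always returns the canonical representative with r ≥ 0 and θ in (−π, π], which is one common way of resolving the ambiguity.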
35.
Euclidean geometry
–
Euclidean geometry is a mathematical system attributed to the Alexandrian Greek mathematician Euclid, which he described in his textbook on geometry, the Elements. Euclid's method consists in assuming a small set of intuitively appealing axioms and deducing many other propositions from these. Although many of Euclid's results had been stated by earlier mathematicians, Euclid was the first to show how these propositions could fit into a comprehensive deductive and logical system. The Elements begins with plane geometry, still taught in school as the first axiomatic system. It goes on to the solid geometry of three dimensions. Much of the Elements states results of what are now called algebra and number theory, explained in geometrical language. For more than two thousand years, the adjective "Euclidean" was unnecessary because no other sort of geometry had been conceived. Euclid's axioms seemed so obvious that any theorem proved from them was deemed true in an absolute, often metaphysical, sense. Today, however, many other self-consistent non-Euclidean geometries are known. Euclidean geometry is an example of synthetic geometry, in that it proceeds logically from axioms to propositions without the use of coordinates. This is in contrast to analytic geometry, which uses coordinates. The Elements is mainly a systematization of earlier knowledge of geometry. Its improvement over earlier treatments was rapidly recognized, with the result that there was little interest in preserving the earlier ones. There are 13 books in the Elements. Books I–IV discuss plane geometry; Books V and VII–X deal with number theory, with numbers treated geometrically via their representation as line segments with various lengths. Notions such as prime numbers and rational and irrational numbers are introduced. The infinitude of prime numbers is proved. A typical result is the 1:3 ratio between the volume of a cone and a cylinder with the same height and base. Euclidean geometry is an axiomatic system, in which all theorems are derived from a small number of axioms. Euclid's postulates include: To draw a straight line from any point to any point. To produce a finite straight line continuously in a straight line.
To describe a circle with any centre and distance. And that all right angles are equal to one another. Although Euclid's statement of the postulates only explicitly asserts the existence of the constructions, they are also taken to be unique. The Elements also include five "common notions", such as: Things that are equal to the same thing are also equal to one another.
36.
Tensor algebra
–
In mathematics, the tensor algebra of a vector space V, denoted T(V) or T•(V), is the algebra of tensors on V with multiplication being the tensor product. The tensor algebra is important because many other algebras arise as quotient algebras of T(V); these include the exterior algebra, the symmetric algebra, Clifford algebras, the Weyl algebra and universal enveloping algebras. Note: in this article, all algebras are assumed to be unital; the unit is explicitly required to define the coproduct. Let V be a vector space over a field K. For any nonnegative integer k, we define the kth tensor power of V to be the tensor product of V with itself k times. That is, TkV consists of all tensors on V of order k; by convention, T0V is the ground field K. We then construct T(V) as the direct sum of TkV for k = 0, 1, 2, …: T(V) = ⨁_{k=0}^{∞} TkV = K ⊕ V ⊕ (V ⊗ V) ⊕ (V ⊗ V ⊗ V) ⊕ ⋯. The multiplication in T(V) is determined by the canonical isomorphism TkV ⊗ TℓV → Tk+ℓV given by the tensor product. This multiplication rule implies that the tensor algebra T(V) is naturally a graded algebra with TkV serving as the grade-k subspace. This grading can be extended to a Z-grading by appending subspaces TkV = {0} for negative integers k. The construction generalizes in a straightforward manner to the tensor algebra of any module M over a commutative ring. If R is a non-commutative ring, one can still perform the construction for any R-R bimodule M. The tensor algebra T(V) is also called the free algebra on the vector space V. As with other free constructions, the functor T is left adjoint to a forgetful functor: in this case, the functor that sends each K-algebra to its underlying vector space. One can, in fact, define the tensor algebra T(V) as the unique algebra satisfying this property; the above universal property shows that the construction of the tensor algebra is functorial in nature.
That is, T is a functor from K-Vect, the category of vector spaces over K, to K-Alg, the category of K-algebras. The functoriality of T means that any linear map from V to W extends uniquely to an algebra homomorphism from T(V) to T(W). If V has finite dimension n, another way of looking at the tensor algebra is as the algebra of polynomials over K in n non-commuting variables. If we take basis vectors for V, those become non-commuting variables in T(V), subject to no constraints beyond associativity. Many interesting algebras arise as quotients of T(V); examples are the exterior algebra, the symmetric algebra, Clifford algebras, the Weyl algebra and universal enveloping algebras. The tensor algebra has two different coalgebra structures. One is compatible with the tensor product, and thus can be extended to a bialgebra, and can further be extended with an antipode to a Hopf algebra structure. The other structure, although simpler, cannot be extended to a bialgebra. The first structure is developed immediately below; the second structure is given in the section on the cofree coalgebra, further down.
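The "non-commuting polynomials" picture can be sketched concretely for a vector space with a finite basis: an element of T(V) is a linear combination of words in the basis symbols, and the tensor product concatenates words. The representation below (words as tuples, elements as dicts from word to coefficient) is my own minimal encoding, not a standard library structure.

```python
# An element of T(V) on basis {'x', 'y'}: a dict mapping words (tuples of
# basis symbols) to coefficients. The empty word () spans T0V = K.

def tensor_multiply(f, g):
    """Bilinear multiplication T_k x T_l -> T_{k+l}:
    concatenate words, multiply coefficients, collect like terms."""
    product = {}
    for w1, c1 in f.items():
        for w2, c2 in g.items():
            w = w1 + w2
            product[w] = product.get(w, 0) + c1 * c2
    return product

x = {('x',): 1}
y = {('y',): 1}
xy = tensor_multiply(x, y)      # the word ('x', 'y') in grade 2
yx = tensor_multiply(y, x)      # the word ('y', 'x') -- a different element
unit = {(): 1}                  # the unit of the algebra, in grade 0
```

Word length is exactly the grade: multiplying a grade-k word by a grade-l word yields a grade-(k + l) word, and xy ≠ yx, reflecting that the variables are subject to no constraints beyond associativity.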