1.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to its exact scope. Mathematicians seek out patterns and use them to formulate new conjectures, and they resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, and measurement; practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said, "The universe cannot be read until we have learned the language and become familiar with the characters in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences.
Applied mathematics has led to entirely new mathematical disciplines, such as statistics. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind. There is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns; in Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a study of mathematics in its own right with Greek mathematics. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today, and the overwhelming majority of new mathematical works contain new mathematical theorems and their proofs. The word máthēma is derived from μανθάνω (manthanō), while the modern Greek equivalent is μαθαίνω (mathainō); in Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study" even in Classical times.

2.
Topological vector space
–
In mathematics, a topological vector space is one of the basic structures investigated in functional analysis. As the name suggests, the space blends a topological structure with the algebraic concept of a vector space. Hilbert spaces and Banach spaces are well-known examples. Unless stated otherwise, the underlying field of a topological vector space is assumed to be either the complex numbers C or the real numbers R. Some authors additionally require the topology on X to be T1; it then follows that the space is Hausdorff. The topological and linear algebraic structures can be tied together even more closely with additional assumptions. The category of topological vector spaces over a given topological field K is commonly denoted TVSK or TVectK; the objects are the topological vector spaces over K and the morphisms are the continuous K-linear maps from one object to another. Every normed vector space has a natural topological structure: the norm induces a metric, and the metric induces a topology. This is a topological vector space because: the vector addition + : V × V → V is jointly continuous with respect to this topology (this follows directly from the triangle inequality obeyed by the norm), and the scalar multiplication · : K × V → V, where K is the underlying scalar field of V, is jointly continuous (this follows from the triangle inequality and homogeneity of the norm). Therefore, all Banach spaces and Hilbert spaces are examples of topological vector spaces. There are also topological vector spaces whose topology is not induced by a norm; many such examples are Montel spaces, and an infinite-dimensional Montel space is never normable. A topological field is a topological vector space over each of its subfields. A cartesian product of a family of topological vector spaces, when endowed with the product topology, is a topological vector space. For instance, the set X of all functions f : R → R, with the product topology, becomes a topological vector space, called the space of pointwise convergence.
The reason for this name is the following: if (fn) is a sequence of elements in X, then fn has limit f in X if and only if fn(x) has limit f(x) for every real number x. This space is complete, but not normable; indeed, every neighborhood of 0 in the product topology contains lines.
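As a small numeric sketch of the convergence criterion above (the sequence fn(x) = x/n, the sample points, and the cutoff index are my own illustrative choices, not taken from the original text), pointwise convergence can be probed coordinate by coordinate:

```python
# Pointwise convergence in X = all functions f: R -> R with the product
# topology: f_n -> f iff f_n(x) -> f(x) for every real x.
def f_n(n):
    # hypothetical example sequence f_n(x) = x / n
    return lambda x: x / n

f_limit = lambda x: 0.0

def converges_pointwise_at(x, tol=1e-6, N=10**10):
    # at this fixed x, f_N(x) is already within tol of the limit
    return abs(f_n(N)(x) - f_limit(x)) < tol

# At each fixed sample point the sequence approaches the zero function,
# even though sup over x of |f_n(x)| is infinite for every n, so the
# convergence is pointwise but not uniform.
samples = [-100.0, -1.0, 0.0, 2.5, 1000.0]
pointwise_ok = all(converges_pointwise_at(x) for x in samples)
```

The sup-norm remark in the comment is why this space is not normable by the uniform norm: no single n works for all x at once.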

3.
Unit ball
–
Usually a specific point has been distinguished as the origin of the space under study, and it is understood that a unit sphere or unit ball is centered at that point. Therefore one speaks of "the" unit ball or "the" unit sphere. For example, a one-dimensional sphere (a circle) is the boundary of what is commonly called a disk, while a two-dimensional sphere is the surface of the Euclidean solid known colloquially as a sphere, whose interior is the three-dimensional ball. A unit sphere is simply a sphere of radius one. The importance of the unit sphere is that any sphere can be transformed to a unit sphere by a combination of translation and scaling; in this way the properties of spheres in general can be reduced to the study of the unit sphere. In Euclidean space of n dimensions, the unit sphere is the set of all points (x1, ..., xn) which satisfy the equation x1² + x2² + ⋯ + xn² = 1. The volume of the unit ball in n dimensions, which we denote Vn, is given by Vn = π^(n/2) / Γ(n/2 + 1), which equals π^(n/2) / (n/2)! if n ≥ 0 is even, and π^⌊n/2⌋ 2^⌈n/2⌉ / n!! if n ≥ 0 is odd, where n!! is the double factorial. The An values satisfy the recursion A0 = 0, A1 = 2, A2 = 2π, and An = (2π/(n − 2)) A(n−2) for n > 2. The Vn values satisfy the recursion V0 = 1, V1 = 2, and Vn = (2π/n) V(n−2) for n > 1. The surface area of a sphere with radius r is An r^(n−1) and the volume of the ball it bounds is Vn r^n. For instance, in three dimensions the area is A = 4πr² for the surface of the ball of radius r, and the volume is V = 4πr³/3 for the ball of radius r. More precisely, the open unit ball in a normed vector space V with norm ∥·∥ is the set {x ∈ V : ∥x∥ < 1}; it is the interior of the closed unit ball {x ∈ V : ∥x∥ ≤ 1}. The latter is the disjoint union of the former and their common border, the unit sphere {x ∈ V : ∥x∥ = 1}. The shape of the unit ball is entirely dependent on the chosen norm; it may well have corners, and for example may look like the cube [−1, 1]^n in the case of the norm l∞ in Rn.
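The recursions and the closed form for Vn above can be checked against each other and against the familiar low-dimensional values; this is a plain-Python sketch (the function names are mine, and the comparison dimensions are an arbitrary choice):

```python
import math

def V(n):
    # volume of the unit n-ball via the closed form pi^(n/2) / Gamma(n/2 + 1)
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def V_rec(n):
    # recursion from the text: V_0 = 1, V_1 = 2, V_n = (2*pi/n) * V_{n-2}
    if n == 0:
        return 1.0
    if n == 1:
        return 2.0
    return 2 * math.pi / n * V_rec(n - 2)

def A_rec(n):
    # recursion from the text: A_0 = 0, A_1 = 2, A_2 = 2*pi,
    # A_n = (2*pi/(n-2)) * A_{n-2} for n > 2
    if n == 0:
        return 0.0
    if n == 1:
        return 2.0
    if n == 2:
        return 2 * math.pi
    return 2 * math.pi / (n - 2) * A_rec(n - 2)

# familiar values: disk area pi, ball volume 4*pi/3, sphere area 4*pi
checks = [
    abs(V_rec(2) - math.pi) < 1e-12,
    abs(V_rec(3) - 4 * math.pi / 3) < 1e-12,
    abs(A_rec(3) - 4 * math.pi) < 1e-12,
    all(abs(V(n) - V_rec(n)) < 1e-9 for n in range(12)),
]
```

The last check confirms that the recursion and the Gamma-function closed form agree for the first dozen dimensions.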

4.
Convex set
–
In convex geometry, a convex set is a subset of an affine space that is closed under convex combinations. For example, a solid cube is a convex set, but anything that is hollow or has an indent, for example a crescent shape, is not convex. The boundary of a convex set is always a convex curve. The intersection of all convex sets containing a given subset A of Euclidean space is called the convex hull of A; it is the smallest convex set containing A. A convex function is a real-valued function defined on an interval with the property that its epigraph is a convex set. Convex minimization is a subfield of optimization that studies the problem of minimizing convex functions over convex sets. The branch of mathematics devoted to the study of properties of convex sets and convex functions is called convex analysis. The notion of a convex set can be generalized as described below. Let S be a vector space over the real numbers, or, more generally, over some ordered field. A set C in S is said to be convex if, for all x and y in C and all t in the interval [0, 1], the point (1 − t)x + ty also belongs to C. In other words, every point on the line segment connecting x and y is in C. This implies that a convex set in a real or complex topological vector space is path-connected. Furthermore, C is strictly convex if every point on the line segment connecting x and y other than the endpoints is inside the interior of C. A set C is called absolutely convex if it is convex and balanced. The convex subsets of R are simply the intervals of R. Some examples of convex subsets of the Euclidean plane are solid regular polygons, solid triangles, and intersections of solid triangles. Some examples of convex subsets of a Euclidean 3-dimensional space are the Archimedean solids; the Kepler-Poinsot polyhedra are examples of non-convex sets. A set that is not convex is called a non-convex set. The complement of a convex set, such as the epigraph of a concave function, is sometimes called a reverse convex set, especially in the context of mathematical optimization. If S is a convex set in n-dimensional space, then for any collection of r vectors u1, ..., ur in S, with r > 1, and for any nonnegative numbers λ1, ..., λr with λ1 + ⋯ + λr = 1, one has ∑ₖ₌₁ʳ λk uk ∈ S.
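The closure-under-convex-combinations property can be exercised numerically; here is a small sketch using the closed unit disk as the convex set S (the random sample points and weights are my own illustrative choices):

```python
import random

def convex_combination(points, weights):
    # sum_k lambda_k * u_k with lambda_k >= 0 and sum of lambda_k == 1
    assert all(w >= 0 for w in weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    return tuple(sum(w * p[i] for w, p in zip(weights, points))
                 for i in range(len(points[0])))

def in_unit_disk(p):
    # the closed unit disk is a convex subset of the plane
    return p[0] ** 2 + p[1] ** 2 <= 1 + 1e-12

random.seed(0)
# sample a few random points inside the disk
pts = []
while len(pts) < 4:
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if x * x + y * y <= 1:
        pts.append((x, y))

# random nonnegative weights normalized to sum to 1
raw = [random.random() for _ in pts]
lam = [r / sum(raw) for r in raw]
combo_inside = in_unit_disk(convex_combination(pts, lam))
```

Because the disk is convex, `combo_inside` is guaranteed to be true for any choice of points and weights of this form.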

5.
Boundary (topology)
–
In topology and mathematics in general, the boundary of a subset S of a topological space X is the set of points which can be approached both from S and from the outside of S. More precisely, it is the set of points in the closure of S not belonging to the interior of S. An element of the boundary of S is called a boundary point of S. The term boundary operation refers to finding or taking the boundary of a set. Notations used for the boundary of a set S include bd(S), fr(S), and ∂S. Some authors use the term frontier instead of boundary in an attempt to avoid confusion with the concept of boundary used in algebraic topology. However, frontier sometimes refers to a different set, namely the set of boundary points which are not actually in the set. A connected component of the boundary of S is called a boundary component of S. If the set consists of discrete points only, then the set has only a boundary and no interior. There are several equivalent definitions of the boundary of a subset S of a topological space X: the intersection of the closure of S with the closure of its complement, or the set of points p of X such that every neighborhood of p contains at least one point of S and at least one point not of S. Consider the real line R with the usual topology. One has ∂(a, b) = ∂[a, b) = ∂[a, b] = {a, b} for a < b, ∂∅ = ∅, ∂Q = R, and ∂(Q ∩ [0, 1]) = [0, 1]. These last two examples illustrate the fact that the boundary of a dense set with empty interior is its closure. In the space of rational numbers with the usual (subspace) topology, the boundary of the set of rationals less than a, where a is irrational, is empty. The boundary of a set is a topological notion and may change if one changes the topology. For example, given the usual topology on R2, the boundary of a closed disk Ω = {(x, y) : x² + y² ≤ 1} is the disk's surrounding circle, ∂Ω = {(x, y) : x² + y² = 1}. If the disk is viewed as a set in R3 with its own usual topology, i.e. Ω = {(x, y, 0) : x² + y² ≤ 1}, then the boundary of the disk is the disk itself, ∂Ω = Ω. If the disk is viewed as its own topological space, then the boundary of the disk is empty. The boundary of a set is closed, and the boundary of the interior of a set as well as the boundary of the closure of a set are both contained in the boundary of the set.
A set is the boundary of some open set if and only if it is closed and nowhere dense. The boundary of a set is the boundary of the complement of the set: ∂S = ∂(Sᶜ). The interior of the boundary of an open or a closed set is the empty set. Hence: p is a boundary point of a set if and only if every neighborhood of p contains at least one point in the set and at least one point not in the set.
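The neighborhood criterion in the last sentence can be probed numerically on the real line; this is only a finite-precision sketch (the set S = [0, 1), the shrinking radii, and the probe points are my own choices), not a substitute for the topological definition:

```python
def in_S(x):
    # S = [0, 1) as a subset of the real line
    return 0 <= x < 1

def is_boundary_point(p, radii=(1.0, 0.1, 1e-3, 1e-6)):
    # p is a boundary point iff every neighborhood of p meets both S and
    # its complement; we probe a few shrinking symmetric neighborhoods
    for r in radii:
        probes = [p - r / 2, p, p + r / 2]
        hits_S = any(in_S(q) for q in probes)
        hits_complement = any(not in_S(q) for q in probes)
        if not (hits_S and hits_complement):
            return False
    return True

# the boundary of [0, 1) is {0, 1}: an interior point (0.5) and an
# exterior point (2.0) both fail the criterion at small radii
results = {p: is_boundary_point(p) for p in (0.0, 1.0, 0.5, 2.0)}
```

Note that 1 is a boundary point of [0, 1) even though it does not belong to the set, matching the frontier discussion above.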

6.
Affine line
–
A Euclidean space is an affine space over the reals, equipped with a metric, the Euclidean distance. Therefore, in Euclidean geometry, an affine property is a property that may be proved in affine spaces, that is, without using the metric. In an affine space, there is no distinguished point that serves as an origin; hence, no vector has a fixed origin and no vector can be uniquely associated to a point. In an affine space, there are instead displacement vectors, also called translation vectors or simply translations, between two points of the space. Thus it makes sense to subtract two points of the space, giving a translation vector, but it does not make sense to add two points of the space. Likewise, it makes sense to add a displacement vector to a point of an affine space, the result being a new point. Any vector space may be considered as an affine space, and this amounts to forgetting the special role played by the zero vector. In this case, the elements of the vector space may be viewed either as points of the affine space or as displacement vectors or translations. When considered as a point, the zero vector is called the origin. Adding a fixed vector to the elements of a linear subspace of a vector space produces an affine subspace. One commonly says that this affine subspace has been obtained by translating the linear subspace by the translation vector. In finite dimensions, such an affine subspace is the solution set of an inhomogeneous linear system; the displacement vectors for that affine space are the solutions of the corresponding homogeneous linear system. Linear subspaces, in contrast, always contain the origin of the vector space. The dimension of an affine space is defined as the dimension of the vector space of its translations. An affine space of dimension one is an affine line. An affine space of dimension 2 is an affine plane. An affine subspace of dimension n − 1 in an affine space or a vector space of dimension n is an affine hyperplane. The following characterization may be easier to understand than the formal definition. Imagine that Alice knows that a certain point is the actual origin, but Bob believes that another point, call it p, is the origin.
Two vectors, a and b, are to be added. Bob, using his origin p, actually computes p + (a − p) + (b − p) rather than a + b. Similarly, Alice and Bob may evaluate any linear combination of a and b, or of any finite set of vectors, and will generally get different answers. However, if the sum of the coefficients in a combination is 1, then Alice and Bob will arrive at the same answer. If Alice travels to λa + (1 − λ)b, then Bob can similarly travel to p + λ(a − p) + (1 − λ)(b − p) = λa + (1 − λ)b. Under this condition, for all coefficients with λ + (1 − λ) = 1, Alice and Bob describe the same point with the same linear combination, despite using different origins.
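The Alice-and-Bob computation above can be replayed directly in coordinates; the particular points a, b and the alternative origin p below are hypothetical values chosen for illustration:

```python
# Alice uses the true origin; Bob believes p is the origin. A combination
# Bob forms relative to p is p + lam*(a - p) + mu*(b - p) in Alice's
# coordinates, while Alice forms lam*a + mu*b directly.
p = (3.0, -1.0)  # hypothetical alternative origin

def bob_combination(a, b, lam, mu):
    return tuple(p[i] + lam * (a[i] - p[i]) + mu * (b[i] - p[i])
                 for i in range(2))

def alice_combination(a, b, lam, mu):
    return tuple(lam * a[i] + mu * b[i] for i in range(2))

a, b = (1.0, 2.0), (5.0, 4.0)

# coefficients summing to 1: both observers name the same point
agree = all(abs(u - w) < 1e-9 for u, w in
            zip(alice_combination(a, b, 0.25, 0.75),
                bob_combination(a, b, 0.25, 0.75)))
# coefficients summing to 2: the p-terms no longer cancel, so they disagree
disagree = any(abs(u - w) > 1e-6 for u, w in
               zip(alice_combination(a, b, 1.0, 1.0),
                   bob_combination(a, b, 1.0, 1.0)))
```

Algebraically, p + λ(a − p) + μ(b − p) = (1 − λ − μ)p + λa + μb, so the dependence on p vanishes exactly when λ + μ = 1.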

7.
Inner product space
–
In linear algebra, an inner product space is a vector space with an additional structure called an inner product. This additional structure associates each pair of vectors in the space with a scalar quantity known as the inner product of the vectors. Inner products allow the introduction of intuitive geometrical notions such as the length of a vector or the angle between two vectors. They also provide the means of defining orthogonality between vectors. Inner product spaces generalize Euclidean spaces to vector spaces of any dimension, and are studied in functional analysis. An inner product induces an associated norm, thus an inner product space is also a normed vector space. A complete space with an inner product is called a Hilbert space. An incomplete space with an inner product is called a pre-Hilbert space, since its completion with respect to the norm induced by the inner product is a Hilbert space. Inner product spaces over the field of complex numbers are sometimes referred to as unitary spaces. In this article, the field of scalars denoted F is either the field of real numbers R or the field of complex numbers C. Formally, an inner product space is a vector space V over the field F together with an inner product, i.e., a map ⟨·, ·⟩ : V × V → F satisfying conjugate symmetry, linearity in the first argument, and positive-definiteness. Some authors, especially in physics and matrix algebra, prefer to define the inner product to be linear in the second argument; then the first argument becomes conjugate linear, rather than the second. In those disciplines we would write the product ⟨x, y⟩ as ⟨y | x⟩, respectively y†x. Here the kets and columns are identified with the vectors of V, and this reverse order is now occasionally followed in the more abstract literature, taking ⟨x, y⟩ to be conjugate linear in x rather than y. A few instead find a middle ground by recognizing both ⟨·, ·⟩ and ⟨· | ·⟩ as distinct notations differing only in which argument is conjugate linear. There are various reasons why it is necessary to restrict the base field to R and C in the definition.
Briefly, the base field has to contain an ordered subfield in order for non-negativity to make sense, and the base field has to have additional structure, such as a distinguished automorphism, to play the role of conjugation. More generally, any quadratically closed subfield of R or C will suffice for this purpose; however, in these cases when it is a proper subfield, even finite-dimensional inner product spaces will fail to be metrically complete. In contrast, all finite-dimensional inner product spaces over R or C, such as those used in quantum computation, are automatically metrically complete. In some cases we need to consider non-negative semi-definite sesquilinear forms; this means that ⟨x, x⟩ is only required to be non-negative, not strictly positive for nonzero x.
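The two conjugation conventions described above differ only in notation, which a short complex-vector sketch makes concrete (the sample vectors are arbitrary choices of mine):

```python
# Two common conventions for the complex inner product on C^n:
# "math" convention: <x, y> linear in the first argument,
# "physics" convention: <y|x> linear in the second argument (the ket).
def ip_math(x, y):
    # linear in x, conjugate linear in y
    return sum(a * b.conjugate() for a, b in zip(x, y))

def ip_phys(x, y):
    # <y|x>: conjugate linear in y, linear in x -- the same number
    return sum(a.conjugate() * b for a, b in zip(y, x))

x = [1 + 2j, 3 - 1j]
y = [2 - 1j, 0 + 1j]

same_value = ip_math(x, y) == ip_phys(x, y)
conjugate_symmetry = ip_math(x, y) == ip_math(y, x).conjugate()
# positive-definiteness: <x, x> is real and positive for nonzero x
positive = ip_math(x, x).real > 0 and abs(ip_math(x, x).imag) < 1e-15
```

Swapping which argument carries the conjugate amounts to taking the complex conjugate of the whole product, which is exactly the adjustment mentioned in the text.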

8.
Banach space
–
In mathematics, more specifically in functional analysis, a Banach space is a complete normed vector space. Banach spaces are named after the Polish mathematician Stefan Banach, who introduced this concept and studied it systematically in 1920–1922 along with Hans Hahn. Banach spaces originally grew out of the study of function spaces by Hilbert, Fréchet, and Riesz earlier in the century. Banach spaces play a central role in functional analysis; in other areas of analysis, the spaces under study are often Banach spaces. The vector space structure allows one to relate the behavior of Cauchy sequences to that of converging series of vectors. All norms on a finite-dimensional vector space are equivalent, and every finite-dimensional normed space over R or C is a Banach space. If X and Y are normed spaces over the same ground field K, the set of all continuous K-linear maps T : X → Y is denoted by B(X, Y). In infinite-dimensional spaces, not all linear maps are continuous. For Y a Banach space, the space B(X, Y) is a Banach space with respect to the operator norm. If X is a Banach space, the space B(X) = B(X, X) forms a unital Banach algebra; the multiplication operation is given by the composition of linear maps. If X and Y are normed spaces, they are isomorphic normed spaces if there exists a linear bijection T : X → Y such that T and its inverse are continuous; if one of the two spaces X or Y is complete, then so is the other space. Two normed spaces X and Y are isometrically isomorphic if, in addition, T is an isometry. The Banach–Mazur distance d(X, Y) between two isomorphic but not isometric spaces X and Y gives a measure of how much the two spaces differ. Every normed space X can be isometrically embedded in a Banach space. More precisely, there is a Banach space Y and an isometric mapping T : X → Y such that T(X) is dense in Y, and Y is unique up to isometric isomorphism: if Z is another Banach space such that there is an isometric isomorphism from X onto a dense subset of Z, then Z is isometrically isomorphic to Y.
This Banach space Y is the completion of the normed space X. The underlying metric space for Y is the same as the metric completion of X, with the vector space operations extended from X to Y. The completion of X is often denoted by X̂. The cartesian product X × Y of two normed spaces is not canonically equipped with a norm; however, several equivalent norms are commonly used, such as ∥(x, y)∥₁ = ∥x∥ + ∥y∥ and ∥(x, y)∥∞ = max(∥x∥, ∥y∥). In this sense, the product X × Y is complete if and only if the two factors are complete. If M is a closed linear subspace of a normed space X, there is a natural norm on the quotient space X / M.
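The equivalence of the two product norms mentioned above follows from the elementary inequality max(s, t) ≤ s + t ≤ 2·max(s, t) for nonnegative s, t; here is a quick sketch with X = Y = R² under the Euclidean norm (the sample pairs are arbitrary):

```python
import math

def euclid(v):
    return math.sqrt(sum(t * t for t in v))

def norm1(x, y):
    # ||(x, y)||_1 = ||x|| + ||y||
    return euclid(x) + euclid(y)

def norm_inf(x, y):
    # ||(x, y)||_inf = max(||x||, ||y||)
    return max(euclid(x), euclid(y))

pairs = [((1.0, 2.0), (3.0, -4.0)),
         ((0.0, 0.0), (1.0, 1.0)),
         ((-2.0, 5.0), (0.5, 0.5))]

# the two norms bound each other with constants 1 and 2, so they induce
# the same topology on the product and the same Cauchy sequences
equivalent = all(norm_inf(x, y) <= norm1(x, y) <= 2 * norm_inf(x, y)
                 for x, y in pairs)
```

Because the constants do not depend on the pair, completeness of the product under one norm implies completeness under the other.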

9.
Euclidean space
–
In geometry, Euclidean space encompasses the two-dimensional Euclidean plane, the three-dimensional space of Euclidean geometry, and certain other spaces. It is named after the Ancient Greek mathematician Euclid of Alexandria. The term Euclidean distinguishes these spaces from other types of spaces considered in modern geometry. Euclidean spaces also generalize to higher dimensions. Classical Greek geometry defined the Euclidean plane and Euclidean three-dimensional space using certain postulates, while the other properties of these spaces were deduced as theorems, and geometric constructions were used to define rational numbers. A more modern approach uses Cartesian coordinates: points of the space are specified with collections of real numbers, and this approach brings the tools of algebra and calculus to bear on questions of geometry and has the advantage that it generalizes easily to Euclidean spaces of more than three dimensions. From the modern viewpoint, there is essentially only one Euclidean space of each dimension. With Cartesian coordinates it is modelled by the real coordinate space of the same dimension. In one dimension, this is the real line; in two dimensions, it is the Cartesian plane; and in higher dimensions it is a coordinate space with three or more real number coordinates. One way to think of the Euclidean plane is as a set of points satisfying certain relationships, expressible in terms of distance. For example, there are two fundamental operations on the plane. One is translation, which means a shifting of the plane so that every point is shifted in the same direction and by the same distance. The other is rotation about a fixed point in the plane. In order to make all of this mathematically precise, the theory must clearly define the notions of distance, angle, translation, and rotation. Even when used in physical theories, Euclidean space is an abstraction detached from actual physical locations, specific reference frames, and measurement instruments.
The standard way to define such a space, as carried out in the remainder of this article, is to define the Euclidean plane as a two-dimensional real vector space equipped with an inner product. The reason for working with abstract vector spaces instead of Rn is that it is often preferable to work in a coordinate-free manner. Once the Euclidean plane has been described in this language, it is actually a simple matter to extend its concept to arbitrary dimensions. For the most part, the vocabulary, formulae, and calculations are not made any more difficult by the presence of more dimensions. Intuitively, the distinction between a Euclidean space and its underlying vector space says merely that there is no canonical choice of where the origin should go in the space.

10.
Hilbert space
–
The mathematical concept of a Hilbert space, named after David Hilbert, generalizes the notion of Euclidean space. It extends the methods of vector algebra and calculus from the two-dimensional Euclidean plane and three-dimensional space to spaces with any finite or infinite number of dimensions. A Hilbert space is a vector space possessing the structure of an inner product that allows length and angle to be measured. Furthermore, Hilbert spaces are complete: there are enough limits in the space to allow the techniques of calculus to be used. Hilbert spaces arise naturally and frequently in mathematics and physics, typically as infinite-dimensional function spaces. The earliest Hilbert spaces were studied from this point of view in the first decade of the 20th century by David Hilbert, Erhard Schmidt, and Frigyes Riesz. They are indispensable tools in the theories of partial differential equations, quantum mechanics, Fourier analysis, and ergodic theory. John von Neumann coined the term Hilbert space for the abstract concept that underlies many of these diverse applications. The success of Hilbert space methods ushered in a very fruitful era for functional analysis. Geometric intuition plays an important role in many aspects of Hilbert space theory. Exact analogs of the Pythagorean theorem and parallelogram law hold in a Hilbert space. At a deeper level, perpendicular projection onto a subspace plays a significant role in optimization problems and other aspects of the theory. An element of a Hilbert space can be specified by its coordinates with respect to a set of coordinate axes. When that set of axes is countably infinite, this means that the Hilbert space can also usefully be thought of in terms of the space of infinite sequences that are square-summable. The latter space is often in the older literature referred to as the Hilbert space. One of the most familiar examples of a Hilbert space is the Euclidean space consisting of three-dimensional vectors, denoted by ℝ³, and equipped with the dot product.
The dot product takes two vectors x and y and produces a real number x·y. If x and y are represented in Cartesian coordinates, then the dot product is defined by (x1, x2, x3) · (y1, y2, y3) = x1y1 + x2y2 + x3y3. The dot product satisfies the following properties: it is symmetric in x and y, x·y = y·x; it is linear in its first argument, (ax1 + bx2)·y = a(x1·y) + b(x2·y) for any scalars a, b and vectors x1, x2, and y; and it is positive definite, for all x, x·x ≥ 0, with equality if and only if x = 0. An operation on pairs of vectors that, like the dot product, satisfies these three properties is known as an inner product, and a vector space equipped with such an inner product is known as an inner product space. Every finite-dimensional inner product space is also a Hilbert space. Multivariable calculus in Euclidean space relies on the ability to compute limits, and to have useful criteria for concluding that limits exist.
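The three dot-product properties listed above can be verified directly on sample vectors (the particular vectors and scalars below are arbitrary test values of mine):

```python
def dot(x, y):
    # dot product on R^3: x1*y1 + x2*y2 + x3*y3
    return x[0] * y[0] + x[1] * y[1] + x[2] * y[2]

x = (1.0, -2.0, 3.0)
y = (4.0, 0.0, -1.0)
x2 = (0.5, 2.0, 1.0)
a, b = 2.0, -3.0

# symmetry: x . y == y . x
symmetric = dot(x, y) == dot(y, x)

# linearity in the first argument: (a*x + b*x2) . y == a*(x.y) + b*(x2.y)
lhs = dot(tuple(a * x[i] + b * x2[i] for i in range(3)), y)
linear = abs(lhs - (a * dot(x, y) + b * dot(x2, y))) < 1e-12

# positive definiteness: x . x > 0 for nonzero x, and 0 . 0 == 0
zero = (0.0, 0.0, 0.0)
positive_definite = dot(x, x) > 0 and dot(zero, zero) == 0
```

These are exactly the axioms an abstract inner product must satisfy; the dot product is the special case that turns ℝ³ into a Hilbert space.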

11.
Polarization identity
–
In mathematics, the polarization identity is any one of a family of formulas that express the inner product of two vectors in terms of the norm of a normed vector space. Let ∥x∥ denote the norm of x and ⟨x, y⟩ the inner product of vectors x and y. The various forms given below are all related by the parallelogram law: 2∥u∥² + 2∥v∥² = ∥u + v∥² + ∥u − v∥². The polarization identity can be generalized to other contexts in abstract algebra and linear algebra. If V is a real vector space, then the inner product is defined by the polarization identity ⟨x, y⟩ = ¼(∥x + y∥² − ∥x − y∥²) for all x, y ∈ V. If V is a complex vector space, the inner product is given by the polarization identity ⟨x, y⟩ = ¼(∥x + y∥² − ∥x − y∥² + i∥x + iy∥² − i∥x − iy∥²) for all x, y ∈ V. Note that this defines an inner product which is linear in its first argument; to adjust for the contrary definition, one needs to take the complex conjugate. A special case is an inner product given by the dot product, the so-called standard or Euclidean inner product. In this case, common forms of the identity include u · v = ½(∥u + v∥² − ∥u∥² − ∥v∥²), u · v = ½(∥u∥² + ∥v∥² − ∥u − v∥²), and u · v = ¼(∥u + v∥² − ∥u − v∥²). The second form of the identity can be written as ∥u − v∥² = ∥u∥² + ∥v∥² − 2(u · v). This is essentially a vector form of the law of cosines for the triangle formed by the vectors u, v, and u − v. In particular, u · v = ∥u∥ ∥v∥ cos θ, where θ is the angle between the vectors. The basic relation between the norm and the dot product is given by the equation ∥v∥² = v · v. The forms of the polarization identity now follow by solving these equations for u · v. In linear algebra, the polarization identity applies to any norm on a vector space defined in terms of an inner product by the equation ∥v∥ = √⟨v, v⟩. The Cauchy–Schwarz inequality ensures that the magnitude of the above defined cosine is at most 1; the choice of the cosine function ensures that when ⟨u, v⟩ = 0, the angle θ = π/2 or −π/2, where the sign is determined by an orientation on the vector space. In this case, the identities become ⟨u, v⟩ = ½(∥u + v∥² − ∥u∥² − ∥v∥²), ⟨u, v⟩ = ½(∥u∥² + ∥v∥² − ∥u − v∥²), and ⟨u, v⟩ = ¼(∥u + v∥² − ∥u − v∥²). Conversely, if a norm on a vector space satisfies the parallelogram law, then any of the above identities can be used to define a compatible inner product.
In functional analysis, the introduction of an inner product norm like this is often used to make a Banach space into a Hilbert space. The polarization identities are not restricted to inner products: if B is a symmetric bilinear form on a vector space and Q(v) = B(v, v) is the associated quadratic form, then 2B(u, v) = Q(u + v) − Q(u) − Q(v).
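Both the complex polarization identity and the parallelogram law can be checked numerically; this sketch uses the standard inner product on C², linear in the first argument as in the text (the test vectors are arbitrary values of mine):

```python
import math

def norm(v):
    # norm induced by the standard complex inner product
    return math.sqrt(sum(abs(t) ** 2 for t in v))

def ip(x, y):
    # <x, y> = sum of x_k * conj(y_k): linear in x, conjugate linear in y
    return sum(a * b.conjugate() for a, b in zip(x, y))

def polarized(x, y):
    # <x,y> = (1/4)(||x+y||^2 - ||x-y||^2 + i||x+iy||^2 - i||x-iy||^2)
    def n2(scale):
        return norm([a + scale * b for a, b in zip(x, y)]) ** 2
    return 0.25 * ((n2(1) - n2(-1)) + 1j * (n2(1j) - n2(-1j)))

x = [1 + 1j, 2 - 3j]
y = [0 + 2j, 1 + 1j]
recovered = abs(polarized(x, y) - ip(x, y)) < 1e-9

# parallelogram law: 2||u||^2 + 2||v||^2 == ||u+v||^2 + ||u-v||^2
u, v = [1.0, 2.0], [3.0, -1.0]
add = [a + b for a, b in zip(u, v)]
sub = [a - b for a, b in zip(u, v)]
law = abs(2 * norm(u) ** 2 + 2 * norm(v) ** 2
          - norm(add) ** 2 - norm(sub) ** 2) < 1e-9
```

Recovering the inner product purely from norms of four vectors is exactly what the Jordan–von Neumann converse mentioned above exploits.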

12.
Norm (mathematics)
–
A norm is a function that assigns a strictly positive length or size to each vector in a vector space, save for the zero vector, which is assigned a length of zero. A seminorm, on the other hand, is allowed to assign zero length to some non-zero vectors. A norm must also satisfy certain properties pertaining to scalability and additivity, which are given in the definition below. A simple example is the 2-dimensional Euclidean space R2 equipped with the Euclidean norm. Elements in this vector space are usually drawn as arrows in a 2-dimensional cartesian coordinate system starting at the origin. The Euclidean norm assigns to each vector the length of its arrow; because of this, the Euclidean norm is often known as the magnitude. A vector space on which a norm is defined is called a normed vector space. Similarly, a vector space with a seminorm is called a seminormed vector space. It is often possible to supply a norm for a given vector space in more than one way. If p(v) = 0 then v is the zero vector. By the first axiom, absolute homogeneity, we have p(0) = 0 and p(−v) = p(v), so that by the triangle inequality 2p(v) = p(v) + p(−v) ≥ p(0) = 0, i.e. p(v) ≥ 0. A seminorm on V is a function p : V → R with only the properties 1 (absolute homogeneity) and 2 (the triangle inequality). Every vector space V with seminorm p induces a normed space V/W, called the quotient space, where W is the subspace of vectors v with p(v) = 0; the induced norm on V/W is clearly well-defined and is given by p(v + W) = p(v). A topological vector space is called normable if the topology of the space can be induced by a norm. If a norm p : V → R is given on a vector space V, then the norm of a vector v ∈ V is usually denoted by enclosing it within double vertical lines: ∥v∥ = p(v). Such notation is also sometimes used if p is only a seminorm. For the length of a vector in Euclidean space, the notation |v| with single vertical lines is also widespread. In Unicode, the codepoint of the double vertical line character ‖ is U+2016. The double vertical line should not be confused with the parallel-to symbol; this is usually not a problem because the former is used in parenthesis-like fashion, whereas the latter is used as an infix operator. The double vertical line used here should not be confused with the symbol used to denote lateral clicks.
The single vertical line | is called "vertical line" in Unicode. The trivial seminorm has p(x) = 0 for all x in V. Every linear form f on a vector space defines a seminorm by x → |f(x)|. The absolute value ∥x∥ = |x| is a norm on the one-dimensional vector spaces formed by the real or complex numbers. The absolute value norm is a special case of the L1 norm.
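The norm axioms and the norm/seminorm distinction can be exercised concretely with the p-norms on R²; this is a small sketch with arbitrary sample vectors of mine:

```python
# p-norms on R^n; the trivial seminorm p(x) = 0 satisfies absolute
# homogeneity and the triangle inequality but fails separation of points.
def p_norm(v, p):
    return sum(abs(t) ** p for t in v) ** (1.0 / p)

x = [3.0, -4.0]
y = [1.0, 2.0]

l1 = p_norm(x, 1)  # sum of absolute values
l2 = p_norm(x, 2)  # Euclidean length

# absolute homogeneity: ||2x|| == 2||x||
homogeneous = abs(p_norm([2 * t for t in x], 2) - 2 * l2) < 1e-12
# triangle inequality: ||x + y|| <= ||x|| + ||y||
triangle = p_norm([a + b for a, b in zip(x, y)], 2) <= l2 + p_norm(y, 2)

def trivial_seminorm(v):
    return 0.0

# the trivial seminorm vanishes on a nonzero vector, so it is not a norm
not_a_norm = trivial_seminorm(x) == 0.0 and any(t != 0 for t in x)
```

For x = (3, −4) the L1 and L2 norms are 7 and 5 respectively, two different legitimate norms on the same vector space.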

13.
Stereotype space
–
In functional analysis and related areas of mathematics, stereotype spaces are topological vector spaces defined by a special variant of the reflexivity condition. Each pseudocomplete barreled space X is stereotype. A metrizable locally convex space X is stereotype if and only if X is complete. Each infinite-dimensional normed space X considered with the X⋆-weak topology is not stereotype. There exist stereotype spaces which are not Mackey spaces. Some simple connections between the properties of a stereotype space X and those of its dual space X⋆ are expressed in a list of regularities. The first results on this type of reflexivity of topological vector spaces were obtained by M. F. Smith in 1952. Further investigations were conducted by B. S. Brudovskii, W. C. Waterhouse, K. Brauner, S. S. Akbarov, and E. T. Shavgulidze. Each locally convex space X can be transformed into a stereotype space with the help of the standard operations of pseudocompletion and pseudosaturation, defined by the following two propositions: if X is a locally convex space, then its pseudosaturation X△ is stereotype; dually, if X is a locally convex space, then its pseudocompletion X▽ is stereotype. For an arbitrary locally convex space X, the spaces X△▽ and X▽△ are stereotype. This defines two natural tensor products, X ⊛ Y and X ⊙ Y, constructed from spaces Hom of operators. The stereotype approximation property is weaker than the existence of a Schauder basis, and the following proposition holds: if two stereotype spaces X and Y have the stereotype approximation property, then the spaces Hom(X, Y), X ⊛ Y and X ⊙ Y have the stereotype approximation property as well. In particular, if X has the stereotype approximation property, then the same is true for X⋆. This allows one to reduce the list of counterexamples in comparison with the Banach theory. The arising theory of stereotype algebras allows one to simplify constructions in the duality theories for non-commutative groups.
In particular, the group algebras in these theories become Hopf algebras in the algebraic sense. References: Schaefer, Helmuth H., Topological Vector Spaces; Robertson, A. P. and Robertson, W. J., Topological Vector Spaces; Smith, M. F., The Pontrjagin duality theorem in linear spaces; Brudovskii, B. S., On k- and c-reflexivity of locally convex vector spaces; Brauner, K., Duals of Fréchet spaces and a generalization of the Banach-Dieudonné theorem; Akbarov, S. S., Pontryagin duality in the theory of topological vector spaces and in topological algebra; Akbarov, S. S., Holomorphic functions of exponential type, envelopes and refinements in categories, with applications to functional analysis; Akbarov, S. S. and Shavgulidze, E. T., On two classes of spaces reflexive in the sense of Pontryagin; Akbarov, S. S., Pontryagin duality and topological algebras.

14.
Dual space
–
In mathematics, any vector space V has a corresponding dual vector space consisting of all linear functionals on V, together with a naturally induced linear structure. The dual space as defined above is defined for all vector spaces; when defined for a topological vector space, there is a subspace of the dual space, corresponding to continuous linear functionals, called the continuous dual space. Dual vector spaces find application in many branches of mathematics that use vector spaces. When applied to vector spaces of functions, dual spaces are used to describe measures, distributions, and Hilbert spaces. Consequently, the dual space is an important concept in functional analysis. Given any vector space V over a field F, the dual space V∗ is defined as the set of all linear maps φ : V → F. Since linear maps are vector space homomorphisms, the dual space is also sometimes denoted by Hom(V, F). The dual space V∗ itself becomes a vector space over F when equipped with an addition and scalar multiplication satisfying (φ + ψ)(x) = φ(x) + ψ(x) and (aφ)(x) = a(φ(x)) for all φ and ψ ∈ V∗, x ∈ V, and a ∈ F. Elements of the dual space V∗ are sometimes called covectors or one-forms. The pairing of a functional φ in the dual space V∗ and an element x of V is denoted ⟨φ, x⟩ := φ(x); this pairing defines a nondegenerate bilinear mapping ⟨·, ·⟩ : V∗ × V → F called the natural pairing. If V is finite-dimensional, then V∗ has the same dimension as V. Given a basis {e1, ..., en} in V, it is possible to construct a specific basis in V∗, called the dual basis. This dual basis is a set {e¹, ..., eⁿ} of linear functionals on V, defined by the relation eⁱ(c1 e1 + ⋯ + cn en) = ci, i = 1, ..., n, for any choice of coefficients ci ∈ F. In particular, letting in turn each one of those coefficients be equal to one and the other coefficients to zero gives the system of equations eⁱ(ej) = δij, where δij is the Kronecker delta. For example, if V is R2 and its basis is chosen to be {e1, e2}, then e¹ and e² are one-forms such that e¹(e1) = 1, e¹(e2) = 0, e²(e1) = 0, and e²(e2) = 1. In particular, if we interpret Rn as the space of columns of n real numbers, its dual space is typically written as the space of rows of n real numbers; such a row acts on Rn as a linear functional by ordinary matrix multiplication. One way to see this is that a functional maps every n-vector x into a number y.
So an element of V∗ can be thought of as a particular family of parallel lines covering the plane. To compute the value of a functional on a given vector one counts, informally, how many of these lines the vector crosses. The dimension of R∞ is countably infinite, whereas RN does not have a countable basis; again, the sum is finite because fα is nonzero for only finitely many α.
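The dual-basis construction above can be checked numerically. The following is a minimal sketch (my own illustration, with an arbitrarily chosen basis of R2, not an example from the text): the dual-basis covectors are the rows of the inverse of the matrix whose columns are the basis vectors, since the identity E⁻¹E = I encodes exactly the relations ei(ej) = δij.

```python
import numpy as np

# A basis of R^2 stored as the columns of E (basis chosen arbitrarily
# for this sketch).  The dual-basis functionals e^i are the rows of
# E^{-1}: the identity E^{-1} E = I encodes e^i(e_j) = delta_ij.
E = np.array([[1.0, 1.0],
              [0.0, 2.0]])        # basis vectors e1 = (1, 0), e2 = (1, 2)
dual = np.linalg.inv(E)           # row i is the covector e^i

assert np.allclose(dual @ E, np.eye(2))   # e^i(e_j) = delta_ij
v = 3 * E[:, 0] + 5 * E[:, 1]             # v has coordinates (3, 5) in this basis
assert np.allclose(dual @ v, [3.0, 5.0])  # e^i extracts the i-th coordinate
```

Applying the row `dual[i]` to any vector recovers its i-th coordinate with respect to the chosen basis, which is precisely the defining property of the dual basis.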

15.
Operator topologies
–
In the mathematical field of functional analysis there are several standard topologies which are given to the algebra B(H) of bounded linear operators on a Hilbert space H. Let (Tn) be a sequence of operators on the Hilbert space H, and consider the statement that Tn converges to some operator T. This could have several different meanings: If ∥Tn − T∥ → 0, that is, if the operator norm of the difference tends to zero, we say that Tn → T in the uniform operator topology. If Tn x → T x for all x in H, we say that Tn → T in the strong operator topology. Finally, suppose Tn x → T x in the weak topology of H; this means that F(Tn x) → F(T x) for all continuous linear functionals F on H, and in this case we say that Tn → T in the weak operator topology. All of these notions make sense and are useful for a Banach space in place of the Hilbert space H, and there are many topologies that can be defined on B(H) besides the ones used above. These topologies are all locally convex, which implies that they are defined by a family of seminorms. In analysis, a topology is called strong if it has many open sets and weak if it has few open sets, so that the corresponding modes of convergence are, respectively, strong and weak. The diagram on the right is a summary of the relations. The Banach space B(H) has a predual B(H)∗, consisting of the trace class operators, whose dual is B(H). The seminorm pw for w positive in the predual is defined to be pw(x) = w(x∗x)^1/2. If B is a vector space of linear maps on the vector space A, then σ(A, B) is defined to be the weakest topology on A such that all elements of B are continuous. The norm topology or uniform topology or uniform operator topology is defined by the usual norm ||x|| on B(H), and it is stronger than all the other topologies below. The weak (Banach space) topology is σ(B(H), B(H)∗), in words the weakest topology such that all elements of the dual B(H)∗ are continuous. It is the weak topology on the Banach space B(H), and it is stronger than the ultraweak and weak operator topologies. The strong* operator topology or strong* topology is defined by the seminorms ||xh|| and ||x∗h|| for h in H, and it is stronger than the strong and weak operator topologies.
The strong operator topology or strong topology is defined by the seminorms ||xh|| for h in H, and it is stronger than the weak operator topology. The weak operator topology or weak topology is defined by the seminorms |(xh1, h2)| for h1, h2 in H. The continuous linear functionals on B(H) for the weak, strong, and strong* (operator) topologies are the same, and are the finite linear combinations of the linear functionals x ↦ (xh1, h2) for h1, h2 in H.
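The gap between norm and strong operator convergence can be illustrated numerically. The following sketch is my own (not from the text): in a large but finite dimension N, the projections Pn onto the first n coordinates mimic the classic infinite-dimensional example, converging pointwise on vectors (strong operator topology) while staying at operator-norm distance exactly 1 from the identity for every n < N.

```python
import numpy as np

# Finite-dimensional illustration (assumption: N = 200 stands in for an
# infinite-dimensional space).  P_n projects onto the first n coordinates:
# P_n x -> x for each fixed x (strong convergence), yet ||P_n - I|| = 1
# for all n < N, so there is no convergence in the uniform (norm) topology.
N = 200
x = np.ones(N) / np.sqrt(N)      # a fixed unit vector
I = np.eye(N)

for n in (10, 100, 190):
    P = np.diag([1.0] * n + [0.0] * (N - n))   # projection onto first n coords
    strong_gap = np.linalg.norm(P @ x - x)     # shrinks as n grows
    norm_gap = np.linalg.norm(P - I, ord=2)    # stays exactly 1 for n < N
    print(n, strong_gap, norm_gap)
```

Here strong_gap equals sqrt((N − n)/N), which tends to 0, while norm_gap is pinned at 1, exactly the distinction between the two topologies described above.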

16.
Linear map
–
In mathematics, a linear map is a mapping V → W between two modules (for example, vector spaces) that preserves the operations of addition and scalar multiplication. An important special case is when V = W, in which case the map is called a linear operator, or an endomorphism of V. Sometimes the term linear function has the same meaning as linear map. A linear map always maps linear subspaces onto linear subspaces; for instance it maps a plane through the origin to a plane through the origin, a line through the origin, or just the origin. Linear maps can often be represented as matrices, and simple examples include rotation and reflection linear transformations. In the language of abstract algebra, a linear map is a module homomorphism; in the language of category theory it is a morphism in the category of modules over a given ring. Let V and W be vector spaces over the same field K. A function f : V → W is said to be linear if for any vectors x1, …, xm ∈ V and scalars a1, …, am ∈ K, the equality f(a1x1 + ⋯ + amxm) = a1f(x1) + ⋯ + amf(xm) holds. If V and W are vector spaces over different fields, it is then necessary to specify which of these fields is being used in the definition of linear. If V and W are considered as spaces over the field K as above, we speak of K-linear maps. For example, the conjugation of complex numbers is an R-linear map C → C, but it is not C-linear. A linear map from V to K is called a linear functional. These statements generalize to any left-module RM over a ring R without modification, and to any right-module upon reversing of the scalar multiplication. The zero map between two left-modules over the same ring is always linear. The identity map on any module is a linear operator. Any homothecy centered at the origin of a vector space, v ↦ cv where c is a scalar, is a linear operator; this does not hold in general for modules, where such a map might only be semilinear. For real numbers, the map x ↦ x2 is not linear. Conversely, any linear map between finite-dimensional vector spaces can be represented by a matrix; see the following section. Differentiation defines a linear map from the space of all differentiable functions to the space of all functions. It also defines a linear operator on the space of all smooth functions.
If V and W are finite-dimensional vector spaces over a field F, then the functions that send linear maps f : V → W to dimF(W) × dimF(V) matrices in the way described in the sequel are themselves linear maps. The expected value of a random variable is linear, as for random variables X and Y we have E[X + Y] = E[X] + E[Y] and E[aX] = aE[X].
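The two examples above, differentiation as a linear operator and matrix representation of linear maps, can be combined in one small sketch (mine, not the text's): differentiation on polynomials of degree < n is a linear map, so once a basis is fixed (here the monomials) it is given by a matrix acting on coefficient vectors.

```python
import numpy as np

def diff_matrix(n):
    """Matrix of d/dx on polynomials of degree < n, in the monomial
    basis 1, x, x^2, ..., x^(n-1) (coefficients stored low-to-high)."""
    D = np.zeros((n, n))
    for k in range(1, n):
        D[k - 1, k] = k          # d/dx x^k = k x^(k-1)
    return D

# p(x) = 3 + 2x + x^2  ->  p'(x) = 2 + 2x
p = np.array([3.0, 2.0, 1.0])
print(diff_matrix(3) @ p)        # -> [2. 2. 0.]
```

Because differentiation is linear, the matrix applies to any coefficient vector, and composing two applications of D gives the matrix of the second derivative, mirroring composition of linear maps as matrix multiplication.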

17.
Bilinear map
–
In mathematics, a bilinear map is a function combining elements of two vector spaces to yield an element of a third vector space, and it is linear in each of its arguments. Let V, W and X be three vector spaces over the same base field F. A bilinear map is a function B : V × W → X such that, when we hold the first entry fixed while letting the second entry vary, the result is a linear operator, and similarly for when we hold the second entry fixed. If V = W and we have B(v, w) = B(w, v) for all v, w in V, then B is called symmetric. The case where X is the base field F, so that we have a bilinear form, is particularly useful. The definition works without any changes if instead of vector spaces over a field F we use modules over a commutative ring, and it generalizes to n-ary functions, where the proper term is multilinear. For a right R-module M and a left S-module N, this satisfies B(rm, n) = r⋅B(m, n) and B(m, ns) = B(m, n)⋅s for all m in M, n in N, r in R and s in S. A first immediate consequence of the definition is that B(v, w) = 0X whenever v = 0V or w = 0W. This may be seen by writing the zero vector 0V as 0⋅0V and moving the scalar 0 outside, in front of B. The set L(V, W; X) of all bilinear maps is a linear subspace of the space of all maps from V × W into X. If V, W, X are finite-dimensional, then so is L(V, W; X). For X = F, i.e. bilinear forms, the dimension of this space is dim V × dim W. To see this, choose a basis for V and W; then each bilinear map can be represented by the matrix (B(ei, fj)). Now, if X is a space of higher dimension, we obviously have dim L(V, W; X) = dim V × dim W × dim X. Matrix multiplication is a bilinear map M(m, n) × M(n, p) → M(m, p). If a vector space V over the real numbers R carries an inner product, then the inner product is a bilinear map V × V → R. In general, for a vector space V over a field F, a bilinear form on V is the same as a bilinear map V × V → F. If V is a vector space with dual space V∗, then the application operator, b(f, v) = f(v), is a bilinear map from V∗ × V to the base field. Let V and W be vector spaces over the same base field F. If f is a member of V∗ and g a member of W∗, then b(v, w) = f(v)g(w) defines a bilinear map V × W → F. The cross product in R3 is a bilinear map R3 × R3 → R3. Let B : V × W → X be a bilinear map, and L : U → W be a linear map; then (v, u) ↦ B(v, L(u)) is a bilinear map on V × U.
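The matrix representation of bilinear forms mentioned above can be sketched concretely. This is my own illustration (the matrix M is chosen arbitrarily): on R2, any matrix M defines a bilinear form B(v, w) = vᵀMw, and conversely every bilinear form arises this way once bases are chosen, with Mij = B(ei, fj).

```python
import numpy as np

# A bilinear form on R^2 represented by an (arbitrarily chosen) matrix M:
# B(v, w) = v^T M w.  Linearity in each slot and vanishing at the zero
# vector follow directly from matrix algebra.
M = np.array([[2.0, 1.0],
              [0.0, 3.0]])

def B(v, w):
    return v @ M @ w

v = np.array([1.0, 2.0])
w = np.array([0.0, 1.0])
u = np.array([3.0, -1.0])
assert np.isclose(B(2 * v + u, w), 2 * B(v, w) + B(u, w))  # linear in slot 1
assert np.isclose(B(v, 2 * w + u), 2 * B(v, w) + B(v, u))  # linear in slot 2
assert B(np.zeros(2), w) == 0.0                            # B(0, w) = 0
```

Note that B is linear in each argument separately but not jointly: B(2v, 2w) = 4·B(v, w), not 2·B(v, w), which is exactly the bilinear (rather than linear) behavior described above.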

18.
Linear form
–
In linear algebra, a linear functional or linear form is a linear map from a vector space to its field of scalars. The set of all linear functionals from V to k, Homk(V, k), forms a vector space over k with the operations of addition and scalar multiplication defined pointwise. This space is called the dual space of V, or sometimes the algebraic dual space. It is often written V∗ or V′ when the field k is understood. If V is a topological vector space, the space of continuous linear functionals — the continuous dual — is often simply called the dual space. If V is a Banach space, then so is its dual. To distinguish the ordinary dual space from the continuous dual space, the former is sometimes called the algebraic dual. In finite dimensions, every linear functional is continuous, so the continuous dual is the same as the algebraic dual. Suppose that vectors in the coordinate space Rn are represented as column vectors x = (x1, …, xn)ᵀ. For each row vector a = (a1, …, an) there is a linear functional f defined by f(x) = a1x1 + ⋯ + anxn; this is just the matrix product of the row vector a and the column vector x: f(x) = ax. Linear functionals first appeared in functional analysis, the study of vector spaces of functions. Let Pn denote the vector space of real-valued polynomial functions of degree ≤ n defined on an interval [a, b]. If c ∈ [a, b], then let evc : Pn → R be the evaluation functional evc(f) = f(c). The mapping f ↦ f(c) is linear since (f + g)(c) = f(c) + g(c) and (αf)(c) = αf(c). If x0, …, xn are n+1 distinct points in [a, b], then the evaluation functionals evxi, i = 0, …, n, form a basis of the dual space of Pn. The integration functional I(f) = ∫[a,b] f(x) dx is linear, and it defines a linear functional on the subspace Pn of polynomials of degree ≤ n. If x0, …, xn are n+1 distinct points in [a, b], then there are coefficients a0, …, an for which I(f) = a0f(x0) + ⋯ + anf(xn) for all f ∈ Pn, and this forms the foundation of the theory of numerical quadrature. This follows from the fact that the linear functionals evxi : f ↦ f(xi) defined above form a basis of the dual space of Pn. Linear functionals are particularly important in quantum mechanics: quantum mechanical systems are represented by Hilbert spaces, which are anti-isomorphic to their own dual spaces.
A state of a quantum mechanical system can be identified with a linear functional; for more information see bra–ket notation. In the theory of generalized functions, certain kinds of generalized functions called distributions can be realized as linear functionals on spaces of test functions.
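The quadrature claim above, that integration on Pn is a combination of n+1 point evaluations, can be verified by solving a small linear system. This sketch is my own (the nodes 0, 1/2, 1 on [0, 1] are an assumption, not the text's choice); matching the functional on the monomial basis recovers the classical Simpson's rule weights.

```python
import numpy as np

# Write I(f) = integral of f over [0, 1] as a0*ev_{x0} + a1*ev_{x1} + a2*ev_{x2}
# on polynomials of degree <= 2.  Matching both sides on the monomials x^k
# gives the linear system sum_i a_i x_i^k = 1/(k+1), k = 0, 1, 2.
nodes = np.array([0.0, 0.5, 1.0])            # n+1 = 3 distinct points (assumed)
n = len(nodes) - 1
V = np.vander(nodes, increasing=True).T      # row k holds x_i^k
moments = 1.0 / np.arange(1, n + 2)          # integral of x^k over [0, 1]
a = np.linalg.solve(V, moments)
print(a)                                     # -> [1/6, 4/6, 1/6], Simpson's rule
```

Because the evaluation functionals at distinct nodes form a basis of the dual space of Pn, the system is always uniquely solvable, which is exactly the foundation of numerical quadrature described above.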

19.
Spectrum of a C*-algebra
–
In mathematics, the spectrum of a C*-algebra or dual of a C*-algebra A, denoted Â, is the set of unitary equivalence classes of irreducible *-representations of A. A *-representation π of A on a Hilbert space H is irreducible if, and only if, there is no closed subspace K different from H and {0} which is invariant under all the operators π(x), x ∈ A. We implicitly assume that irreducible representation means non-null irreducible representation, thus excluding trivial representations on one-dimensional spaces. As explained below, the spectrum Â is also naturally a topological space. One of the most important applications of this concept is to provide a notion of dual object for any locally compact group. That the dual is not a complete invariant is easily seen, as the dual of any finite-dimensional full matrix algebra Mn consists of a single point. The topology of Â can be defined in several equivalent ways; we first define it in terms of the primitive spectrum. The primitive spectrum of A is the set Prim(A) of primitive ideals of A, where a primitive ideal is the kernel of an irreducible *-representation. The set of primitive ideals is a topological space with the hull-kernel topology. This is defined as follows: if X is a set of primitive ideals, its hull-kernel closure X̄ is the set of primitive ideals containing the intersection of the ideals in X. Hull-kernel closure is easily shown to be an idempotent operation, that is X̄̄ = X̄, and it can be shown to satisfy the Kuratowski closure axioms. As a consequence, it can be shown that there is a unique topology τ on Prim(A) such that the closure of a set X with respect to τ is identical to the hull-kernel closure of X. Since unitarily equivalent representations have the same kernel, the map π ↦ ker(π) factors through a surjective map k : Â → Prim(A). We use the map k to define the topology on Â as follows: the open sets of Â are the inverse images k−1(U) of open subsets U of Prim(A). The hull-kernel topology is an analogue for non-commutative rings of the Zariski topology for commutative rings. The topology on Â induced from the hull-kernel topology has other characterizations in terms of states of A.
The spectrum of a commutative C*-algebra A coincides with the Gelfand dual of A. In particular, suppose X is a compact Hausdorff space. Then there is a natural homeomorphism I : X ≅ Prim(C(X)). This mapping is defined by I(x) = {f ∈ C(X) : f(x) = 0}. I(x) is a maximal ideal in C(X), so it is in fact primitive. For details of the proof, see the Dixmier reference. For a commutative C*-algebra, Â ≅ Prim(A). Let H be a separable infinite-dimensional Hilbert space. L(H) has two proper norm-closed *-ideals: I0 = {0} and the ideal K = K(H) of compact operators, and {I0, K} is a subset of Prim(L(H)). Since every primitive ideal contains {0}, the hull-kernel closure of the point I0 is all of Prim(L(H)); thus Prim(L(H)) is a non-Hausdorff space. The spectrum of L(H), on the other hand, is much larger.

20.
Singular value decomposition
–
In linear algebra, the singular value decomposition (SVD) is a factorization of a real or complex matrix. It is the generalization of the eigendecomposition of a positive semidefinite normal matrix to any m × n matrix via an extension of the polar decomposition, and it has many useful applications in signal processing and statistics. The diagonal entries σi of Σ are known as the singular values of M. The columns of U and the columns of V are called the left-singular vectors and right-singular vectors of M. The singular value decomposition can be computed using the following observations: The left-singular vectors of M are a set of orthonormal eigenvectors of MM∗. The right-singular vectors of M are a set of orthonormal eigenvectors of M∗M. The non-zero singular values of M are the square roots of the non-zero eigenvalues of both M∗M and MM∗. Suppose M is an m × n matrix whose entries come from the field K, which is either the field of real numbers or the field of complex numbers. Then there exists a factorization M = UΣV∗, where U is an m × m unitary matrix, Σ is an m × n rectangular diagonal matrix with non-negative real numbers on the diagonal, and V∗ is the conjugate transpose of the n × n unitary matrix V, thus also unitary. The diagonal entries σi of Σ are known as the singular values of M. A common convention is to list the singular values in descending order; in this case, the diagonal matrix Σ is uniquely determined by M. Thus the expression UΣV∗ can be interpreted as a composition of three geometrical transformations: a rotation or reflection, a scaling, and another rotation or reflection. For instance, the figure above explains how a matrix can be described as such a sequence. If the rotation is done first, M = PR, then R is the same and P = UΣU∗ has the same eigenvalues. This shows that the SVD is a generalization of the eigenvalue decomposition of pure stretches in orthogonal directions to arbitrary matrices which both stretch and rotate. As shown in the figure, the singular values can be interpreted as the semiaxes of an ellipse in 2D. This concept can be generalized to n-dimensional Euclidean space, with the singular values of any n × n square matrix being viewed as the semiaxes of an n-dimensional ellipsoid.
Since U and V∗ are unitary, the columns of each of them form a set of orthonormal vectors, which can be regarded as basis vectors. The matrix M maps the basis vector Vi to the stretched unit vector σiUi. By the definition of a unitary matrix, the same is true for their conjugate transposes U∗ and V. In short, the columns of U, U∗, V, and V∗ are orthonormal bases. For example, a 4 × 5 matrix M can have a singular value decomposition M = UΣV∗ in which Σ is zero outside of the diagonal and one diagonal element is zero.
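The decomposition can be computed directly with a numerical library. In this sketch the 4 × 5 matrix is chosen by me for illustration (the text's explicit example matrices did not survive extraction); it has exactly the features described above: singular values in descending order, one of which is zero.

```python
import numpy as np

# Illustrative 4x5 matrix (chosen for this sketch, not from the text).
M = np.array([[1.0, 0.0, 0.0, 0.0, 2.0],
              [0.0, 0.0, 3.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 0.0, 0.0]])

U, s, Vt = np.linalg.svd(M)          # s is returned in descending order
Sigma = np.zeros_like(M)             # rebuild the 4x5 rectangular diagonal
Sigma[:4, :4] = np.diag(s)

assert np.allclose(U @ Sigma @ Vt, M)            # M = U Sigma V*
assert np.allclose(U @ U.T, np.eye(4))           # columns of U are orthonormal
assert np.allclose(s, [3.0, np.sqrt(5), 2.0, 0.0])  # one singular value is zero
```

Note that `np.linalg.svd` returns only the vector of singular values, so Σ has to be reassembled as a rectangular diagonal matrix; the last assertion shows a zero singular value, reflecting the fact that this M has rank 3.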

21.
Closed graph theorem
–
In mathematics, the closed graph theorem is a basic result which characterizes continuous functions in terms of their graphs. There are several versions of the theorem; in mathematics, there are several results known as the closed graph theorem. For any function T : X → Y, we define the graph of T to be the set {(x, T(x)) : x ∈ X}. In point-set topology, the closed graph theorem states the following: if X is a topological space and Y is a compact Hausdorff space, then the graph of T is closed if and only if T is continuous. In functional analysis, the closed graph theorem states that a linear operator between Banach spaces is continuous if and only if its graph is closed; in the latter case we say that T is a closed operator. Note that the operator is required to be everywhere-defined, i.e. the domain D(T) of T is X. This condition is necessary, as there exist closed linear operators that are unbounded. The usual proof of the closed graph theorem employs the open mapping theorem; in fact, the closed graph theorem and the open mapping theorem are equivalent. The closed graph theorem can be reformulated as follows: under suitable completeness conditions, if f : X → Y is a linear map whose graph is closed then f is continuous. The Borel graph theorem, proved by L. Schwartz, shows that the closed graph theorem is valid for linear maps defined on and valued in most spaces encountered in analysis. Recall that a topological space is called a Polish space if it is a separable complete metrizable space, and that a Souslin space is the continuous image of a Polish space. The weak dual of a separable Fréchet space and the dual of a separable Fréchet–Montel space are Souslin spaces. Also, the space of distributions and all Lp-spaces over open subsets of Euclidean space, as well as other spaces that occur in analysis, are Souslin spaces. The Borel graph theorem states: let X and Y be locally convex Hausdorff spaces and let u : X → Y be linear. If X is the inductive limit of an arbitrary family of Banach spaces and Y is a Souslin space, then u is continuous whenever its graph is a Borel set in X × Y. An improvement upon this theorem was proved by A. Martineau. A topological space X is called a Kσδ if it is the countable intersection of countable unions of compact sets. A Hausdorff topological space Y is called K-analytic if it is the continuous image of a Kσδ space.
Every compact set is K-analytic, so that there are non-separable K-analytic spaces. Also, every Polish space, Souslin space, and reflexive Fréchet space is K-analytic, as is the weak dual of a Fréchet space. The generalized theorem states: let X and Y be locally convex Hausdorff spaces and let u : X → Y be linear; if X is the inductive limit of an arbitrary family of Banach spaces and Y is a K-analytic space, then u is continuous whenever its graph is closed.

22.
Hyperplane separation theorem
–
In geometry, the hyperplane separation theorem is a theorem about disjoint convex sets in n-dimensional Euclidean space. There are several rather similar versions; in one version, if both disjoint convex sets are open, then there is a hyperplane in between them, but not necessarily any gap. An axis which is orthogonal to a separating hyperplane is a separating axis. The hyperplane separation theorem is due to Hermann Minkowski. The Hahn–Banach separation theorem generalizes the result to topological vector spaces; a related result is the supporting hyperplane theorem. In geometry, a maximum-margin hyperplane is a hyperplane which separates two clouds of points and is equidistant from the two. The margin between the hyperplane and the clouds is maximal; see the article on support vector machines for more details. The proof is based on the following lemma: a nonempty closed convex set K contains a unique vector of minimum norm. Proof of lemma: let (xj) be a sequence in K such that |xj| → δ, the infimum of the norms over K. Note that (xi + xj)/2 is in K since K is convex, so by the parallelogram law the sequence is Cauchy and its limit is the required vector. ◻ Proof of theorem: given disjoint nonempty convex sets A, B, let K = A + (−B) = {x − y : x ∈ A, y ∈ B}. Since −B is convex and the sum of convex sets is convex, K is convex, and 0 is not in K since A and B are disjoint. By the lemma, the closure K̄ of K, which is convex, contains a vector v of minimum norm. Hence, for any x in A and y in B, we have ⟨x − y, v⟩ ≥ |v|2. Thus, if v is nonzero, the proof is complete since inf x∈A ⟨x, v⟩ ≥ |v|2 + sup y∈B ⟨y, v⟩. More generally, let us first take the case when the interior of K is nonempty; the interior can be exhausted by nonempty compact convex subsets Kn, n = 1, 2, …. Since 0 is not in K, each Kn contains a vector vn of minimum length, and by the argument in the early part, we have ⟨x, vn⟩ ≥ |vn|2 for all x in Kn. We can normalize the vn to have length one. Then the sequence (vn) contains a convergent subsequence with limit v, which is nonzero. We have ⟨x, v⟩ ≥ 0 for any x in the interior of K, and we now finish the proof as before. Finally, if K has empty interior, the affine set that it spans has dimension less than that of the whole space.
Consequently K is contained in some hyperplane ⟨·, v⟩ = c; thus ⟨x, v⟩ ≥ c for all x in K. ◻ The number of dimensions must be finite: in infinite-dimensional spaces there are examples of two closed, convex, disjoint sets which cannot be separated by a closed hyperplane, even in the sense where the inequalities are not strict. The above proof also proves the first version of the theorem mentioned in the lede.
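A separating hyperplane can be written down explicitly in simple cases. The following is my own sketch (the disks, their centers, and radii are assumptions chosen for illustration): for two disjoint closed disks in R2, the normal u points from one center to the other, and the offset c is placed in the gap between the two boundaries.

```python
import numpy as np

# Two disjoint closed disks in R^2 (centers and radii chosen for this sketch).
c1, r1 = np.array([0.0, 0.0]), 1.0
c2, r2 = np.array([4.0, 0.0]), 1.5
v = c2 - c1
d = np.linalg.norm(v)
assert d > r1 + r2                        # the disks are disjoint
u = v / d                                 # unit normal of the hyperplane
c = ((u @ c1 + r1) + (u @ c2 - r2)) / 2   # offset strictly between the boundaries

rng = np.random.default_rng(0)
def sample_disk(center, radius, n=200):
    """Uniform random points inside a disk."""
    ang = rng.uniform(0.0, 2.0 * np.pi, n)
    rad = radius * np.sqrt(rng.uniform(0.0, 1.0, n))
    return center + np.stack([rad * np.cos(ang), rad * np.sin(ang)], axis=1)

A = sample_disk(c1, r1)
B = sample_disk(c2, r2)
assert np.all(A @ u < c) and np.all(B @ u > c)   # {x : u·x = c} separates A and B
```

Because both sets here are compact, this is the "with a gap" version of the theorem; the same normal u works for any offset c in the open interval between u·c1 + r1 and u·c2 − r2.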