1.
Special orthogonal group
–
In mathematics, the orthogonal group in dimension n, denoted O(n), is the group of distance-preserving transformations of a Euclidean space of dimension n that preserve a fixed point, where the group operation is composition of transformations. Equivalently, it is the group of n×n orthogonal matrices, where the group operation is given by matrix multiplication; an orthogonal matrix is a real matrix whose inverse equals its transpose. It is a subgroup of the general linear group GL(n), given by O(n) = {Q ∈ GL(n) | QᵀQ = QQᵀ = I}, where Qᵀ is the transpose of Q and I is the identity matrix. The determinant of any orthogonal matrix is either 1 or −1; the orthogonal n-by-n matrices with determinant 1 form a normal subgroup of O(n) known as the special orthogonal group, denoted SO(n), consisting of all proper rotations. This group is called the rotation group because, in dimensions 2 and 3, its elements are the usual rotations about a point or an axis. In low dimensions, these groups have been widely studied; see SO(2), SO(3) and SO(4). This article mainly discusses the orthogonal groups of quadratic forms that may be expressed over some basis as the dot product, that is, over the reals. Over the reals, for any non-degenerate quadratic form there is a basis in which the form becomes a sum of squares with coefficients 1 or −1. Thus the orthogonal group depends only on the numbers of 1s and of −1s, and is denoted O(p, q); for details, see indefinite orthogonal group. The derived subgroup Ω of O(n) is an often-studied object, and the Cartan–Dieudonné theorem describes the structure of the orthogonal group for a non-singular form. By analogy with the pair GL–SL, the full orthogonal group is sometimes called the general orthogonal group and denoted GO; the term rotation group can refer to either the special or the general orthogonal group. When this distinction is to be emphasized, the groups may be denoted O(n) and GO(n), reserving n for the dimension of the space. The letters p or r are also used, indicating the rank of the corresponding Lie algebra. In two dimensions, O(2) is the group of all rotations about the origin and all reflections along a line through the origin, while SO(2) is the group of all rotations about the origin. These groups are related: SO(2) is a subgroup of O(2) of index 2.
More generally, in any number of dimensions an even number of reflections gives a rotation; therefore, the rotations form a subgroup of O(n), but the reflections do not. A reflection through the origin may be generated as a combination of one reflection along each of the axes; in even dimensions, the reflection through the origin is not a reflection in the usual sense, but rather a rotation.
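The defining condition QᵀQ = I and the determinant criterion for SO(n) can be checked numerically. The sketch below (pure Python, helper names are my own) verifies that a 2D rotation lies in SO(2) with determinant 1 and a reflection lies in O(2) with determinant −1:

```python
import math

def mat_mul(A, B):
    # multiply two 2x2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def is_orthogonal(Q, tol=1e-12):
    # Q is orthogonal iff Q^T Q = I
    P = mat_mul(transpose(Q), Q)
    I = [[1.0, 0.0], [0.0, 1.0]]
    return all(abs(P[i][j] - I[i][j]) < tol for i in range(2) for j in range(2))

t = math.pi / 3
rotation = [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]     # element of SO(2): det = +1
reflection = [[1.0, 0.0], [0.0, -1.0]]       # in O(2) but not SO(2): det = -1

print(is_orthogonal(rotation), round(det(rotation)))      # True 1
print(is_orthogonal(reflection), round(det(reflection)))  # True -1
# an even number of reflections gives a rotation (det = +1):
print(round(det(mat_mul(reflection, reflection))))        # 1
```

The last line illustrates the statement above: composing two reflections yields determinant (−1)·(−1) = +1, i.e. a rotation.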
2.
Classical mechanics
–
In physics, classical mechanics is one of the two major sub-fields of mechanics, along with quantum mechanics. Classical mechanics is concerned with the set of physical laws describing the motion of bodies under the influence of a system of forces. The study of the motion of bodies is an ancient one, making classical mechanics one of the oldest and largest subjects in science, engineering and technology. Classical mechanics describes the motion of everyday objects, from projectiles to parts of machinery, as well as astronomical objects, such as spacecraft, planets and stars. Within classical mechanics are fields of study that describe the behavior of solids, liquids and gases. Classical mechanics provides extremely accurate results as long as the domain of study is restricted to large objects and to speeds that do not approach the speed of light; when objects become sufficiently small or fast, classical mechanics must be supplemented or replaced by quantum mechanics and relativity. Some definitions exclude relativity from classical mechanics, since these aspects of physics were developed long before the emergence of quantum physics and relativity; however, a number of modern sources do include relativistic mechanics, which in their view represents classical mechanics in its most developed and accurate form. Later, more abstract and general methods were developed, leading to reformulations of classical mechanics known as Lagrangian mechanics and Hamiltonian mechanics. These advances were largely made in the 18th and 19th centuries, and they extend substantially beyond Newton's work, particularly through their use of analytical mechanics. The following introduces the basic concepts of classical mechanics. For simplicity, it often models real-world objects as point particles; the motion of a point particle is characterized by a small number of parameters: its position, mass, and the forces applied to it. Each of these parameters is discussed in turn. In reality, the kind of objects that classical mechanics can describe always have a non-zero size.
Objects with non-zero size have more complicated behavior than hypothetical point particles, because of the additional degrees of freedom. However, the results for point particles can be applied to such objects by treating them as composite objects, made of a large number of collectively acting point particles. The center of mass of a composite object behaves like a point particle. Classical mechanics uses common-sense notions of how matter and forces exist and interact. It assumes that matter and energy have definite, knowable attributes such as location in space and speed; non-relativistic mechanics also assumes that forces act instantaneously. The position of a point particle is defined with respect to a fixed reference point in space called the origin O. A simple coordinate system might describe the position of a point P by means of an arrow, designated r, pointing from the origin O to P. In general, the point particle need not be stationary relative to O, so r is a function of t, the time.
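The position function r(t) described above can be sketched for the special case of constant acceleration; the numbers below are illustrative, not taken from the text:

```python
# Position r(t) of a point particle under constant acceleration,
# measured relative to a fixed origin O.
def position(r0, v0, a, t):
    """r(t) = r0 + v0*t + 0.5*a*t^2, computed componentwise."""
    return tuple(r + v * t + 0.5 * acc * t * t
                 for r, v, acc in zip(r0, v0, a))

r0 = (0.0, 0.0)    # initial position: at the origin
v0 = (3.0, 4.0)    # initial velocity in m/s (illustrative values)
g = (0.0, -9.8)    # gravitational acceleration in m/s^2

print(position(r0, v0, g, 1.0))  # position after one second
```

This is the familiar projectile formula; for a composite object the same expression tracks the center of mass, as noted above.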
3.
Geometry
–
Geometry is a branch of mathematics concerned with questions of shape, size, relative position of figures, and the properties of space. A mathematician who works in the field of geometry is called a geometer. Geometry arose independently in a number of early cultures as a practical way of dealing with lengths, areas, and volumes. Geometry began to see elements of formal mathematical science emerging in the West as early as the 6th century BC. By the 3rd century BC, geometry was put into an axiomatic form by Euclid, whose treatment, Euclid's Elements, set a standard for many centuries to follow. Geometry arose independently in India, with texts providing rules for geometric constructions appearing as early as the 3rd century BC. Islamic scientists preserved Greek ideas and expanded on them during the Middle Ages. By the early 17th century, geometry had been put on a solid analytic footing by mathematicians such as René Descartes. Since then, and into modern times, geometry has expanded into non-Euclidean geometry and manifolds. While geometry has evolved significantly throughout the years, there are some general concepts that are more or less fundamental to geometry; these include the concepts of points, lines, planes, surfaces, and angles. Contemporary geometry has many subfields. Euclidean geometry is geometry in its classical sense; the mandatory educational curriculum of the majority of nations includes the study of points, lines, planes, angles, triangles, congruence, similarity, solid figures, and circles. Euclidean geometry also has applications in computer science, crystallography, and various branches of modern mathematics. Differential geometry uses techniques of calculus and linear algebra to study problems in geometry; it has applications in physics, including in general relativity. Topology is the field concerned with the properties of geometric objects that are unchanged by continuous mappings.
In practice, this often means dealing with large-scale properties of spaces. Convex geometry investigates convex shapes in the Euclidean space and its more abstract analogues, often using techniques of real analysis; it has close connections to convex analysis, optimization and functional analysis. Algebraic geometry studies geometry through the use of multivariate polynomials and other algebraic techniques; it has applications in many areas, including cryptography and string theory. Discrete geometry is concerned mainly with questions of the relative position of simple geometric objects, such as points; it shares many methods and principles with combinatorics. Geometry has applications to many fields, including art, architecture and physics, as well as to other branches of mathematics. The earliest recorded beginnings of geometry can be traced to ancient Mesopotamia and Egypt; the earliest known texts on geometry are the Egyptian Rhind Papyrus and Moscow Papyrus, and the Babylonian clay tablets such as Plimpton 322. For example, the Moscow Papyrus gives a formula for calculating the volume of a truncated pyramid, and later clay tablets demonstrate that Babylonian astronomers implemented trapezoid procedures for computing Jupiter's position and motion within time-velocity space.
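The truncated-pyramid (frustum) rule mentioned above can be stated as V = (h/3)(a² + ab + b²) for a square frustum with base side a, top side b and height h. A minimal check, using the worked example traditionally associated with problem 14 of the Moscow Papyrus:

```python
def frustum_volume(a, b, h):
    """Volume of a truncated square pyramid with base side a, top side b,
    and height h: V = (h/3) * (a^2 + a*b + b^2)."""
    return h * (a * a + a * b + b * b) / 3

# Base side 4, top side 2, height 6 gives volume 56.
print(frustum_volume(4, 2, 6))  # 56.0
```

Setting b = 0 recovers the full-pyramid formula V = (h/3)a², as one would expect.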
4.
Group (mathematics)
–
In mathematics, a group is an algebraic structure consisting of a set of elements equipped with an operation that combines any two elements to form a third element. The operation satisfies four conditions called the group axioms, namely closure, associativity, identity and invertibility. Groups allow entities with highly diverse mathematical origins in abstract algebra and beyond to be handled in a flexible way while retaining their essential structural aspects. The ubiquity of groups in areas within and outside mathematics makes them a central organizing principle of contemporary mathematics. Groups share a fundamental kinship with the notion of symmetry. The concept of a group arose from the study of polynomial equations; after contributions from other fields such as number theory and geometry, the group notion was generalized and firmly established around 1870. Modern group theory, an active mathematical discipline, studies groups in their own right. To explore groups, mathematicians have devised various notions to break groups into smaller, better-understandable pieces, such as subgroups, quotient groups and simple groups. A rich theory has developed for finite groups, which culminated with the classification of finite simple groups. Since the mid-1980s, geometric group theory, which studies finitely generated groups as geometric objects, has become a particularly active area in group theory. One of the most familiar groups is the set of integers Z, which consists of the numbers …, −4, −3, −2, −1, 0, 1, 2, 3, 4, …. The following properties of integer addition serve as a model for the group axioms given in the definition below. For any two integers a and b, the sum a + b is also an integer; that is, addition of integers always yields an integer. This property is known as closure under addition. For all integers a, b and c, (a + b) + c = a + (b + c). Expressed in words, adding a to b first, and then adding the result to c, gives the same final result as adding a to the sum of b and c.
If a is any integer, then 0 + a = a + 0 = a; zero is called the identity element of addition because adding it to any integer returns the same integer. For every integer a, there is an integer b such that a + b = b + a = 0. The integer b is called the inverse element of the integer a and is denoted −a. The integers, together with the operation +, form a mathematical object belonging to a broad class of objects sharing similar structural aspects. To appropriately understand these structures as a collective, the following abstract definition is developed.
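The four axioms just modeled on integer addition can be checked mechanically for any finite set. A brute-force sketch (the helper name is my own), applied to the integers modulo 5 under addition:

```python
def check_group_axioms(elements, op, identity):
    """Brute-force check of closure, associativity, identity and inverses
    for a finite set `elements` with binary operation `op`."""
    closure = all(op(a, b) in elements for a in elements for b in elements)
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a in elements for b in elements for c in elements)
    ident = all(op(identity, a) == a == op(a, identity) for a in elements)
    inverses = all(any(op(a, b) == identity == op(b, a) for b in elements)
                   for a in elements)
    return closure and assoc and ident and inverses

Z5 = set(range(5))
# addition mod 5 forms a group with identity 0
print(check_group_axioms(Z5, lambda a, b: (a + b) % 5, 0))  # True
# multiplication mod 5 on {0,...,4} does not: 0 has no inverse
print(check_group_axioms(Z5, lambda a, b: (a * b) % 5, 1))  # False
```

The second check fails only on invertibility, which is exactly why one restricts to the nonzero residues to obtain a multiplicative group.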
5.
Rotation
–
A rotation is a circular movement of an object around a center of rotation. A three-dimensional object always rotates around a line called a rotation axis. If the axis passes through the center of mass, the body is said to rotate upon itself. A rotation about an external point, e.g. the Earth about the Sun, is called a revolution or orbital revolution. The ends of the rotation axis are called poles. Mathematically, a rotation is a rigid body movement which, unlike a translation, keeps a point fixed. This definition applies to rotations in both two and three dimensions. All rigid body movements are rotations, translations, or combinations of the two. A rotation is simply a progressive radial orientation to a common point; that common point lies within the axis of that motion, and the axis is perpendicular (at 90 degrees) to the plane of the motion. If the axis of the rotation lies external to the body in question, then the body is said to orbit; there is no fundamental difference between a "rotation" and an "orbit" or "spin". The key distinction is simply where the axis of the rotation lies, and this distinction can be demonstrated for both "rigid" and "non-rigid" bodies. If a rotation around a point or axis is followed by a second rotation around the same point/axis, a third rotation results. The reverse of a rotation is also a rotation; thus, the rotations around a point/axis form a group. However, a rotation around a point or axis and a rotation around a different point/axis may result in something other than a rotation, e.g. a translation. Rotations around the x, y and z axes are called principal rotations. Rotation around any axis can be performed by taking a rotation around the x axis, followed by a rotation around the y axis, followed by a rotation around the z axis; that is to say, any spatial rotation can be decomposed into a combination of principal rotations. In flight dynamics, the principal rotations are known as yaw, pitch, and roll. This terminology is also used in computer graphics. In astronomy, rotation is a commonly observed phenomenon.
Stars, planets and similar bodies all spin around on their axes. The rotation rate of planets in the solar system was first measured by tracking visual features, while stellar rotation is measured through Doppler shift or by tracking active surface features. This rotation induces a centrifugal acceleration in the reference frame of the Earth which slightly counteracts the effect of gravity; the effect is greater the closer one is to the equator.
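Two of the claims above, that composing principal rotations yields another rotation (determinant stays +1), and that successive rotations around the same axis combine by adding angles, can be verified directly. A small sketch with hand-rolled 3×3 matrices (helper names are my own):

```python
import math

def rot_x(t):  # principal rotation around the x axis
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(t):  # principal rotation around the z axis (yaw)
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(A):
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
            - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
            + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

# composing principal rotations yields another rotation: det stays +1
R = mat_mul(rot_z(0.4), rot_x(1.1))
print(round(det3(R), 6))  # 1.0

# two rotations around the same axis combine by adding the angles
a, b = 0.3, 0.5
lhs = mat_mul(rot_z(a), rot_z(b))
rhs = rot_z(a + b)
print(all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
          for i in range(3) for j in range(3)))  # True
```

The angle-addition check is the group property from the text: rotations around a fixed axis are closed under composition.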
6.
Origin (mathematics)
–
In mathematics, the origin of a Euclidean space is a special point, usually denoted by the letter O, used as a fixed point of reference for the geometry of the surrounding space. In physical problems, the choice of origin is often arbitrary; this allows one to pick an origin point that makes the mathematics as simple as possible, often by taking advantage of some kind of geometric symmetry. In a Cartesian coordinate system, the origin is the point where the axes of the system intersect; the origin divides each of these axes into two halves, a positive and a negative semiaxis. The coordinates of the origin are all zero: for example, (0, 0) in two dimensions and (0, 0, 0) in three. In a polar coordinate system, the origin may also be called the pole. In Euclidean geometry, the origin may be chosen freely as any convenient point of reference. The origin of the complex plane can be referred to as the point where the real axis and the imaginary axis intersect; in other words, it is the complex number zero.
7.
Three-dimensional space
–
Three-dimensional space is a geometric setting in which three values are required to determine the position of an element. This is the informal meaning of the term dimension. In physics and mathematics, a sequence of n numbers can be understood as a location in n-dimensional space; when n = 3, the set of all such locations is called three-dimensional Euclidean space. It is commonly represented by the symbol ℝ3, and it serves as a three-parameter model of the physical universe in which all known matter exists. However, this space is only one example of a large variety of spaces in three dimensions called 3-manifolds. Furthermore, in this case, the three values can be labeled by any combination of three chosen from the terms width, height, depth, and breadth. In mathematics, analytic geometry describes every point in space by means of three coordinates. Three coordinate axes are given, each perpendicular to the other two at the origin, the point at which they cross; they are usually labeled x, y, and z. Two distinct points determine a line. Three distinct points are either collinear or determine a unique plane, and four distinct points can either be collinear, coplanar, or determine the entire space. Two distinct lines can intersect, be parallel or be skew. Two parallel lines, or two intersecting lines, lie in a unique plane, so skew lines are lines that do not meet and do not lie in a common plane. Two distinct planes can either meet in a common line or be parallel. Three distinct planes, no pair of which are parallel, can meet in a common line; otherwise, the three lines of intersection of each pair of planes are mutually parallel. A line can lie in a given plane, intersect that plane in a unique point, or be parallel to the plane. In the last case, there will be lines in the plane that are parallel to the given line. A hyperplane is a subspace of one dimension less than the dimension of the full space; the hyperplanes of a three-dimensional space are its two-dimensional subspaces, that is, the planes.
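The trichotomy for two distinct lines (intersecting, parallel or skew) can be decided computationally: the lines are parallel when their directions are proportional, and otherwise they intersect exactly when they are coplanar, which the scalar triple product detects. A sketch assuming each line is given by a point and a direction vector (function names are my own):

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def classify_lines(p1, d1, p2, d2, tol=1e-12):
    """Classify two distinct lines in 3-space, each given as (point, direction),
    as 'parallel', 'intersecting', or 'skew'."""
    w = tuple(b - a for a, b in zip(p1, p2))   # vector between the two points
    n = cross(d1, d2)
    if all(abs(c) < tol for c in n):   # proportional directions
        return "parallel"              # (this branch also catches equal lines)
    if abs(dot(w, n)) < tol:           # coplanar and not parallel
        return "intersecting"
    return "skew"

print(classify_lines((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))  # skew
print(classify_lines((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 0, 0)))  # parallel
print(classify_lines((0, 0, 0), (1, 0, 0), (0, 0, 0), (0, 1, 0)))  # intersecting
```

The coplanarity test dot(w, d1 × d2) = 0 is exactly the statement that w, d1 and d2 span at most a plane.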
8.
Euclidean space
–
In geometry, Euclidean space encompasses the two-dimensional Euclidean plane, the three-dimensional space of Euclidean geometry, and certain other spaces. It is named after the Ancient Greek mathematician Euclid of Alexandria; the term Euclidean distinguishes these spaces from other types of spaces considered in modern geometry. Euclidean spaces also generalize to higher dimensions. Classical Greek geometry defined the Euclidean plane and Euclidean three-dimensional space using certain postulates, while the other properties of these spaces were deduced as theorems, and geometric constructions were used to define rational numbers. In the modern approach, it is more common to define Euclidean space using coordinates: points of the space are specified with collections of real numbers. This approach brings the tools of algebra and calculus to bear on questions of geometry, and has the advantage that it generalizes easily to Euclidean spaces of more than three dimensions. From the modern viewpoint, there is essentially only one Euclidean space of each dimension. With Cartesian coordinates it is modelled by the coordinate space of the same dimension: in one dimension, this is the real line; in two dimensions, it is the Cartesian plane; and in higher dimensions it is a coordinate space with three or more real number coordinates. One way to think of the Euclidean plane is as a set of points satisfying certain relationships, expressible in terms of distance and angle. For example, there are two fundamental operations on the plane. One is translation, which means a shifting of the plane so that every point is shifted in the same direction and by the same distance. The other is rotation about a fixed point in the plane. In order to make all of this mathematically precise, the theory must clearly define the notions of distance, angle, translation, and rotation. Even when used in physical theories, Euclidean space is an abstraction detached from actual physical locations, specific reference frames, and measurement instruments.
The standard way to define such a space, as carried out in the remainder of this article, is to define the Euclidean plane as a two-dimensional real vector space equipped with an inner product. The reason for working with arbitrary vector spaces instead of Rn is that it is often preferable to work in a coordinate-free manner. Once the Euclidean plane has been described in this language, it is actually a simple matter to extend its concept to arbitrary dimensions. For the most part, the vocabulary, formulae, and calculations are not made any more difficult by the presence of more dimensions. Intuitively, the distinction says merely that there is no canonical choice of where the origin should go in the space.
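The inner-product viewpoint just described makes distance and angle derived notions: the norm is √⟨u, u⟩, distance is the norm of the difference, and the angle comes from ⟨u, v⟩ = |u||v| cos θ. A minimal sketch on R² with the standard dot product:

```python
import math

def inner(u, v):
    # standard inner (dot) product on R^2
    return u[0] * v[0] + u[1] * v[1]

def norm(u):
    return math.sqrt(inner(u, u))

def distance(p, q):
    return norm((q[0] - p[0], q[1] - p[1]))

def angle(u, v):
    # angle between two nonzero vectors, from <u, v> = |u||v| cos(theta)
    return math.acos(inner(u, v) / (norm(u) * norm(v)))

print(distance((0, 0), (3, 4)))                     # 5.0
print(round(math.degrees(angle((1, 0), (0, 1)))))   # 90
```

Nothing here depends on the dimension being 2; replacing the tuples and the sum by their n-dimensional versions gives the general case, which is the point made in the paragraph above.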
9.
Function composition
–
In mathematics, function composition is the pointwise application of one function to the result of another to produce a third function. The resulting composite function is denoted g ∘ f : X → Z; the notation g ∘ f is read as "g circle f", "g round f", "g composed with f", "g after f", "g following f", "g of f", or "g on f". Intuitively, composing two functions is a chained process in which the output of the inner function becomes the input of the outer function. The composition of functions is a special case of the composition of relations, but the composition of functions has some additional properties. On a finite set, a composition can be computed by simply tracing each element through f and then through g. The composition of functions is always associative, a property inherited from the composition of relations; since there is no distinction between the choices of placement of parentheses, they may be left off without causing any ambiguity. In a strict sense, the composition g ∘ f can be built only if f's codomain equals g's domain; in a wider sense, it is sufficient that the former is a subset of the latter. The functions g and f are said to commute with each other if g ∘ f = f ∘ g. Commutativity is a special property, attained only by particular functions, and often in special circumstances; for example, |x| + 3 = |x + 3| only when x ≥ 0. The composition of one-to-one functions is always one-to-one. Similarly, the composition of two onto functions is always onto. It follows that the composition of two bijections is also a bijection, and the inverse function of a composition has the property that (g ∘ f)−1 = f −1 ∘ g −1. Derivatives of compositions involving differentiable functions can be found using the chain rule; higher derivatives of such functions are given by Faà di Bruno's formula. Suppose one has two functions f : X → X and g : X → X having the same domain and codomain. Then one can form chains of transformations composed together, such as f ∘ f ∘ g ∘ f; such chains have the algebraic structure of a monoid, called a transformation monoid or composition monoid.
In general, transformation monoids can have remarkably complicated structure; one particularly notable example is the de Rham curve. The set of all functions f : X → X is called the full transformation semigroup or symmetric semigroup on X. If the transformations are bijective, then the set of all combinations of these functions forms a transformation group.
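The two key algebraic facts above, associativity and the general failure of commutativity, are easy to exhibit with a small composition helper:

```python
def compose(g, f):
    """Return the composite function g ∘ f : x ↦ g(f(x))."""
    return lambda x: g(f(x))

f = lambda x: x + 3
g = lambda x: 2 * x
h = lambda x: x * x

# composition is associative: (h ∘ g) ∘ f == h ∘ (g ∘ f)
left = compose(compose(h, g), f)
right = compose(h, compose(g, f))
print(left(5) == right(5))                  # True

# but generally not commutative: g ∘ f != f ∘ g
print(compose(g, f)(5), compose(f, g)(5))   # 16 13
```

Chaining calls to compose is exactly the "chains of transformations" construction of the text; with f, g, h all mapping a set to itself, the closure of such chains under compose is the transformation monoid.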
10.
Isometry
–
In mathematics, an isometry is a distance-preserving transformation between metric spaces, usually assumed to be bijective. Isometries are often used in constructions where one space is embedded in another space. For instance, the completion of a metric space M involves an isometry from M into M′, a quotient set of the space of Cauchy sequences on M; the original space M is thus isometrically isomorphic to a subspace of a complete metric space. An isometric surjective linear operator on a Hilbert space is called a unitary operator. Let X and Y be metric spaces with metrics dX and dY. A map f : X → Y is called an isometry or distance-preserving if for any a, b ∈ X one has dY(f(a), f(b)) = dX(a, b). An isometry is automatically injective; otherwise two distinct points, a and b, could be mapped to the same point, thereby contradicting the coincidence axiom of the metric d. This proof is similar to the proof that an order embedding between partially ordered sets is injective. Clearly, every isometry between metric spaces is a topological embedding. A global isometry, isometric isomorphism or congruence mapping is a bijective isometry. Like any other bijection, a global isometry has a function inverse, and the inverse of a global isometry is also a global isometry. Two metric spaces X and Y are called isometric if there is a bijective isometry from X to Y. The set of bijective isometries from a metric space to itself forms a group with respect to function composition. The term global isometry is often abridged to simply isometry, so one should take care to determine from context which type is intended. Any reflection, translation and rotation is a global isometry on Euclidean spaces. The map x ↦ |x| in R is a path isometry but not an isometry; note that unlike an isometry, it is not injective. The isometric linear maps from Cn to itself are given by the unitary matrices. Given two normed vector spaces V and W, a linear isometry is a linear map f : V → W that preserves the norms.
Linear isometries are distance-preserving maps in the above sense; they are global isometries if and only if they are surjective. By the Mazur–Ulam theorem, any surjective isometry of normed spaces over R is affine. Note that ε-isometries are not assumed to be continuous. The restricted isometry property characterizes nearly isometric matrices for sparse vectors.
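The defining property dY(f(a), f(b)) = dX(a, b) can be checked numerically for one of the standard examples named above, a rotation of the Euclidean plane about the origin:

```python
import math

def distance(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

def rotate(p, t):
    """Rotation about the origin by angle t, a global isometry of the plane."""
    c, s = math.cos(t), math.sin(t)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

a, b = (1.0, 2.0), (-3.0, 0.5)
t = 0.7
# d(f(a), f(b)) = d(a, b): the defining property of an isometry
print(abs(distance(rotate(a, t), rotate(b, t)) - distance(a, b)) < 1e-12)  # True
```

Rotation is also bijective (rotating by −t inverts it), so it is a global isometry, matching the claim that rotations belong to the isometry group of the plane.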
11.
Orientation (vector space)
–
In linear algebra, the notion of orientation makes sense in arbitrary finite dimension. In this setting, the orientation of an ordered basis is a kind of asymmetry that makes a reflection impossible to replicate by means of a simple rotation. Thus, in the three-dimensional Euclidean space, the two possible basis orientations are called right-handed and left-handed. An orientation on a real vector space is the arbitrary choice of which ordered bases are positively oriented and which are negatively oriented. In the three-dimensional Euclidean space, right-handed bases are typically declared to be positively oriented. A vector space with an orientation selected is called an oriented vector space, while one not having an orientation selected is called unoriented. Let V be a finite-dimensional real vector space and let b1 and b2 be two ordered bases for V. It is a standard result in linear algebra that there exists a unique linear transformation A : V → V that takes b1 to b2. The bases b1 and b2 are said to have the same orientation if A has positive determinant; otherwise they have opposite orientations. The property of having the same orientation defines an equivalence relation on the set of all ordered bases for V. If V is non-zero, there are exactly two equivalence classes determined by this relation. An orientation on V is an assignment of +1 to one equivalence class and −1 to the other. Every ordered basis lives in one equivalence class or another; thus any choice of an ordered basis for V determines an orientation. For example, the standard basis on Rn provides a standard orientation on Rn, and any choice of an isomorphism between V and Rn will then provide an orientation on V. The ordering of elements in a basis is crucial: two bases with a different ordering will differ by some permutation. They will have the same or opposite orientations according to whether the signature of this permutation is +1 or −1; this is because the determinant of a permutation matrix is equal to the signature of the associated permutation. Similarly, let A be a linear mapping of vector space Rn to Rn.
This mapping is orientation-preserving if its determinant is positive. A zero-dimensional vector space has only a single point, the zero vector. Consequently, the only basis of a zero-dimensional vector space is the empty set ∅. Therefore, there is a single equivalence class of ordered bases, namely the class whose sole member is the empty set.
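The determinant criterion above is directly computable in R²: a basis is positively oriented when the matrix built from its vectors has positive determinant, and two bases share an orientation exactly when those determinants have the same sign (equivalently, the linear map taking one basis to the other has positive determinant). A small sketch:

```python
def det2(b1, b2):
    # determinant of the 2x2 matrix whose columns are the basis vectors
    return b1[0] * b2[1] - b1[1] * b2[0]

def same_orientation(basis_a, basis_b):
    """Two ordered bases of R^2 have the same orientation iff the
    determinants of their vector matrices have the same sign."""
    return (det2(*basis_a) > 0) == (det2(*basis_b) > 0)

standard = ((1, 0), (0, 1))   # the standard, positively oriented basis
swapped  = ((0, 1), (1, 0))   # same vectors, reversed order

print(same_orientation(standard, standard))  # True
print(same_orientation(standard, swapped))   # False
```

Reversing the order of the two vectors is a single transposition, whose signature is −1, which is why the second check fails, exactly as the permutation argument in the text predicts.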
12.
Inverse function
–
In mathematics, an inverse function is a function that reverses another function: if f maps x to y, then the inverse function g maps y back to x, i.e. f(x) = y if and only if g(y) = x. As a simple example, consider the function of a real variable given by f(x) = 5x − 7. Thinking of this as a step-by-step procedure (multiply by 5, then subtract 7), to reverse this and get x back from some output value, say y, we undo each step in reverse order: in this case that means we should add 7 to y and then divide the result by 5. In functional notation this inverse function would be given by g(y) = (y + 7)/5; with y = 5x − 7 we have that f(x) = y and g(y) = x. Not all functions have inverse functions. In order for a function f : X → Y to have an inverse, it must have the property that for every y in Y there is one, and only one, x in X so that f(x) = y. This property ensures that a function g : Y → X will exist having the necessary relationship with f. Let f be a function whose domain is the set X and whose image is the set Y. Then f is invertible if there exists a function g with domain Y and image X, with the property f(x) = y if and only if g(y) = x. If f is invertible, then g is unique, which means that there is exactly one function g satisfying this property. That function g is called the inverse of f, and is usually denoted as f −1. Stated otherwise, a function is invertible if and only if its inverse relation is a function on the range Y. Not all functions have an inverse: for a function to have an inverse, each element y ∈ Y must correspond to no more than one x ∈ X; a function f with this property is called one-to-one or an injection. If f −1 is to be a function on Y, then each element y ∈ Y must correspond to some x ∈ X; functions with this property are called surjections. This property is satisfied by definition if Y is the image of f. To be invertible, a function must be both an injection and a surjection, and if a function f is invertible, then both it and its inverse function f −1 are bijections. There is another convention used in the definition of functions.
This can be referred to as the set-theoretic or graph definition using ordered pairs, in which a codomain is never referred to. Under this convention all functions are surjections, and so being a bijection simply means being an injection; authors using this convention may use the phrasing that a function is invertible if and only if it is an injection. The two conventions need not cause confusion, as long as it is remembered that in this alternate convention the codomain of a function is always taken to be the range of the function. With a function that is not one-to-one, it is impossible to unambiguously deduce an input from its output; such a function is called non-injective or, in some applications, information-losing.
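The worked example from the text, f(x) = 5x − 7 and its inverse g(y) = (y + 7)/5, can be written out directly, with the round-trip identities f⁻¹(f(x)) = x and f(f⁻¹(y)) = y as the check:

```python
def f(x):
    return 5 * x - 7

def f_inverse(y):
    # undo the steps of f in reverse order: add 7, then divide by 5
    return (y + 7) / 5

# round-trip checks: f_inverse(f(x)) == x and f(f_inverse(y)) == y
print(f_inverse(f(3)))   # 3.0
print(f(f_inverse(12)))  # 12.0
```

Since f is both injective and surjective on the reals, the inverse exists everywhere; for a non-injective function such as x ↦ x², no such g can recover the input.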
13.
Identity map
–
In mathematics, an identity function, also called an identity relation or identity map or identity transformation, is a function that always returns the same value that was used as its argument. In equations, the function is given by f(x) = x. Formally, if M is a set, the identity function f on M is defined to be that function with domain and codomain M which satisfies f(x) = x for all elements x in M. In other words, the function value f(x) in M is always the same as the input element x of M. The identity function on M is clearly an injective function as well as a surjective function, so it is bijective. The identity function f on M is often denoted by idM. In set theory, where a function is defined as a particular kind of binary relation, the identity function is given by the identity relation, or diagonal, of M. If f : M → N is any function, then we have f ∘ idM = f = idN ∘ f; in particular, idM is the identity element of the monoid of all functions from M to M. Since the identity element of a monoid is unique, one can alternatively define the identity function on M to be this identity element. Such a definition generalizes to the concept of an identity morphism in category theory. The identity function is a linear operator when applied to vector spaces. The identity function on the positive integers is a completely multiplicative function. In an n-dimensional vector space the identity function is represented by the identity matrix In. In a metric space the identity is trivially an isometry; an object without any symmetry has as its symmetry group the trivial group containing only this isometry. In a topological space, the identity function is always continuous.
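The monoid property stated above, f ∘ idM = f = idN ∘ f, is easy to demonstrate concretely:

```python
def identity(x):
    """The identity function: returns its argument unchanged."""
    return x

def compose(g, f):
    return lambda x: g(f(x))

f = lambda x: x * x + 1

# identity is the identity element for composition: f ∘ id == id ∘ f == f
for x in range(-3, 4):
    assert compose(f, identity)(x) == f(x) == compose(identity, f)(x)

print(identity("unchanged"))  # unchanged
```

The same check works for any f with matching domain and codomain, which is exactly why idM is the identity element of the monoid of functions from M to M.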
14.
Associative property
–
In mathematics, the associative property is a property of some binary operations. In propositional logic, associativity is a valid rule of replacement for expressions in logical proofs. Within an expression containing two or more occurrences in a row of the same associative operator, the order in which the operations are performed does not matter as long as the sequence of the operands is not changed; that is, rearranging the parentheses in such an expression will not change its value. Consider the following equations: (2 + 3) + 4 = 2 + (3 + 4) = 9 and 2 × (3 × 4) = (2 × 3) × 4 = 24. Even though the parentheses were rearranged on each line, the values of the expressions were not altered. Since this holds true when performing addition and multiplication on any real numbers, it can be said that addition and multiplication of real numbers are associative operations. Associativity is not to be confused with commutativity, which addresses whether or not the order of two operands changes the result; for example, the order doesn't matter in the multiplication of real numbers, that is, a × b = b × a. Associative operations are abundant in mathematics; in fact, many algebraic structures explicitly require their binary operations to be associative. However, many important and interesting operations are non-associative; some examples include subtraction, exponentiation and the vector cross product. Formally, a binary operation on a set S is associative if (xy)z = x(yz) = xyz for all x, y, z in S; the associative law can also be expressed in functional notation thus: f(f(x, y), z) = f(x, f(y, z)). If a binary operation is associative, repeated application of the operation produces the same result regardless of how valid pairs of parentheses are inserted in the expression. This is called the generalized associative law; thus the product of four elements can be written unambiguously as abcd. As the number of elements increases, the number of possible ways to insert parentheses grows quickly. Some examples of associative operations include the following. Concatenating three strings by joining the first two and then the third, or the last two and then the first, produces the same result, so string concatenation is associative. In arithmetic, addition and multiplication of real numbers are associative, i.e. (x + y) + z = x + (y + z) = x + y + z and (xy)z = x(yz) = xyz for all x, y, z ∈ R. Addition and multiplication of complex numbers and quaternions are associative.
Addition of octonions is also associative, but multiplication of octonions is non-associative. The greatest common divisor and least common multiple functions act associatively: gcd(gcd(x, y), z) = gcd(x, gcd(y, z)) and lcm(lcm(x, y), z) = lcm(x, lcm(y, z)) for all x, y, z ∈ Z. Taking the intersection or the union of sets is associative: (A ∩ B) ∩ C = A ∩ (B ∩ C) and (A ∪ B) ∪ C = A ∪ (B ∪ C) for all sets A, B, C. Slightly more generally, given four sets M, N, P and Q, with maps h : M → N, g : N → P and f : P → Q, composition satisfies f ∘ (g ∘ h) = (f ∘ g) ∘ h; in short, composition of maps is always associative. An operation on a set with three elements A, B, and C, defined by an arbitrary table, may also be associative; this is checked by verifying the associative law for every triple of elements.
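For a finite set of operands, the associative law can be checked exhaustively, which separates the associative examples above (addition, multiplication, gcd) from the non-associative ones (subtraction):

```python
from itertools import product
from math import gcd

def is_associative(elements, op):
    """Check (x op y) op z == x op (y op z) for all triples of elements."""
    return all(op(op(x, y), z) == op(x, op(y, z))
               for x, y, z in product(elements, repeat=3))

nums = range(-5, 6)
print(is_associative(nums, lambda a, b: a + b))  # True: addition
print(is_associative(nums, lambda a, b: a * b))  # True: multiplication
print(is_associative(range(1, 8), gcd))          # True: gcd
print(is_associative(nums, lambda a, b: a - b))  # False: subtraction
```

Subtraction fails because (x − y) − z = x − y − z while x − (y − z) = x − y + z, which differ whenever z ≠ 0.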
15.
Manifold
–
In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point. More precisely, each point of an n-dimensional manifold has a neighbourhood that is homeomorphic to the Euclidean space of dimension n. One-dimensional manifolds include lines and circles, but not figure eights; two-dimensional manifolds are also called surfaces. Although a manifold locally resembles Euclidean space, globally it may not: for example, the surface of the sphere is not a Euclidean space, but in a region it can be charted by means of map projections of the region into the Euclidean plane. When a region appears in two neighbouring charts, the two representations do not coincide exactly and a transformation is needed to pass from one to the other. Manifolds naturally arise as solution sets of systems of equations and as graphs of functions. One important class of manifolds is the class of differentiable manifolds; this differentiable structure allows calculus to be done on manifolds. A Riemannian metric on a manifold allows distances and angles to be measured; symplectic manifolds serve as the phase spaces in the Hamiltonian formalism of classical mechanics, while four-dimensional Lorentzian manifolds model spacetime in general relativity. After a line, the circle is the simplest example of a topological manifold. Topology ignores bending, so a small piece of a circle is treated exactly the same as a small piece of a line. Consider, for instance, the top part of the unit circle x² + y² = 1, where the y-coordinate is positive. Any point of this arc can be uniquely described by its x-coordinate, so projection onto the first coordinate is a continuous, and invertible, mapping from the upper arc to the open interval (−1, 1). Such functions along with the open regions they map are called charts. Similarly, there are charts for the bottom, left, and right parts of the circle; together, these parts cover the whole circle and the four charts form an atlas for the circle.
The top and right charts, χtop and χright respectively, overlap in their domains; each maps this part of the circle into the interval (0, 1), though differently. Let a be any number in (0, 1); then T(a) = χright(χtop⁻¹(a)) = χright(a, √(1 − a²)) = √(1 − a²). Such a function is called a transition map. The top, bottom, left, and right charts show that the circle is a manifold; charts need not be geometric projections, and the number of charts is a matter of some choice. Slope-based charts provide a second atlas for the circle, with t = 1/s. Each chart omits a single point of the circle, either for s or for t, and it can be proved that it is not possible to cover the full circle with a single chart. Viewed using calculus, the transition function T is simply a function between open intervals, which gives a meaning to the statement that T is differentiable.
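The chart and transition computations for the circle can be sketched numerically (names such as chi_top mirror the text's χtop; the helper functions are illustrative):

```python
import math

# Charts on the unit circle x^2 + y^2 = 1 (a minimal sketch).
def chi_top(p):            # top chart: send a point (x, y) with y > 0 to x
    x, y = p
    return x

def chi_right(p):          # right chart: send a point (x, y) with x > 0 to y
    x, y = p
    return y

def chi_top_inv(a):        # inverse of the top chart on (-1, 1)
    return (a, math.sqrt(1 - a * a))

def transition(a):
    """Transition map T = chi_right o chi_top^(-1) on the overlap 0 < a < 1."""
    return chi_right(chi_top_inv(a))

a = 0.6
print(transition(a))                  # 0.8
print(math.sqrt(1 - a * a))           # agrees with the closed form sqrt(1 - a^2)
```

The numerical agreement with √(1 − a²) is exactly the statement that the transition map between the two charts has the differentiable closed form given in the text.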
16.
Smooth function
–
In mathematical analysis, the smoothness of a function is a property measured by the number of continuous derivatives it has. A smooth function is a function that has derivatives of all orders everywhere in its domain. Differentiability class is a classification of functions according to the properties of their derivatives; higher order differentiability classes correspond to the existence of more derivatives. Consider an open set on the real line and a function f defined on that set with real values. Let k be a non-negative integer. The function f is said to be of class Ck if the derivatives f′, f′′, …, f(k) exist and are continuous. The function f is said to be of class C∞, or smooth, if it has derivatives of all orders. The function f is said to be of class Cω, or analytic, if f is smooth and equals its Taylor series expansion around any point of its domain; Cω is thus strictly contained in C∞. Bump functions are examples of functions in C∞ but not in Cω. To put it differently, the class C0 consists of all continuous functions. The class C1 consists of all differentiable functions whose derivative is continuous; thus, a C1 function is exactly a function whose derivative exists and is of class C0. In particular, Ck is contained in Ck−1 for every k, and C∞, the class of infinitely differentiable functions, is the intersection of the sets Ck as k varies over the non-negative integers. The function f(x) = x if x ≥ 0, and f(x) = 0 if x < 0, is continuous but not differentiable at x = 0, so it is of class C0 but not of class C1. The function g(x) = x² sin(1/x) for x ≠ 0, with g(0) = 0, is differentiable everywhere, but because cos(1/x) oscillates as x → 0, g′ is not continuous at zero; therefore, this function is differentiable but not of class C1. The functions f(x) = |x|^(k+1), where k is even, are continuous and k times differentiable at all x, but at x = 0 they are not (k + 1) times differentiable, so they are of class Ck but not of class Ck+1. The exponential function is analytic, so of class Cω; the trigonometric functions are also analytic wherever they are defined. The bump function f(x) = exp(−1/(1 − x²)) for |x| < 1, and f(x) = 0 otherwise, is an example of a smooth function with compact support. Let n and m be some positive integers; if f is a function from an open subset of Rn with values in Rm, then f has component functions f1, …, fm.
Each of these may or may not have partial derivatives; f is of class Ck if all partial derivatives of its component functions up to order k exist and are continuous, and the classes C∞ and Cω are defined as before. These criteria of differentiability can be applied to the transition functions of a differential structure, and the resulting space is called a Ck manifold. If one wishes to start with a coordinate-independent definition of the class Ck, one may start by considering maps between Banach spaces. A map from one Banach space to another is differentiable at a point if there is an affine map which approximates it at that point.
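A minimal numerical sketch of a bump function, the standard example of a C∞ function with compact support that is not analytic (the particular formula exp(−1/(1 − x²)) is the usual textbook choice, as in the text above):

```python
import math

def bump(x):
    """A standard bump function: smooth everywhere, zero outside (-1, 1),
    and not analytic at x = +/-1, where all derivatives vanish."""
    if abs(x) >= 1:
        return 0.0
    return math.exp(-1.0 / (1.0 - x * x))

print(bump(0.0))    # exp(-1), about 0.3679
print(bump(1.0))    # 0.0: the function and all its derivatives vanish here
print(bump(2.0))    # 0.0: identically zero outside the support [-1, 1]
```

At x = ±1 the function glues the exponential piece to the zero function so smoothly that its Taylor series there is identically zero, which is why it cannot equal its Taylor expansion, i.e. it is C∞ but not Cω.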
17.
Lie group
–
In mathematics, a Lie group /ˈliː/ is a group that is also a differentiable manifold, with the property that the group operations are compatible with the smooth structure. Lie groups are named after Sophus Lie, who laid the foundations of the theory of continuous transformation groups. The term groupes de Lie first appeared in French in 1893 in the thesis of Lie's student Arthur Tresse; an extension of Galois theory to the case of continuous symmetry groups was one of Lie's principal motivations. Lie groups are smooth manifolds and as such can be studied using differential calculus. Lie groups play an important role in modern geometry, on several different levels. Felix Klein argued in his Erlangen program that one can consider various geometries by specifying an appropriate transformation group that leaves certain geometric properties invariant; this idea later led to the notion of a G-structure, where G is a Lie group of local symmetries of a manifold. On a global level, a Lie group may act on a geometric object, such as a Riemannian or a symplectic manifold, and the presence of continuous symmetries expressed via a Lie group action on a manifold places strong constraints on its geometry. Linear actions of Lie groups are especially important, and are studied in representation theory. This insight opened new possibilities in pure algebra, by providing a uniform construction for most finite simple groups. A real Lie group is a group that is also a finite-dimensional real smooth manifold, in which the group operations of multiplication and inversion are smooth maps. Smoothness of the group multiplication μ : G × G → G, μ(x, y) = xy, means that μ is a smooth mapping of the product manifold G × G into G. These two requirements can be combined into the single requirement that the mapping (x, y) ↦ x⁻¹y be a smooth mapping of the product manifold into G. The 2 × 2 real invertible matrices form a group under multiplication, denoted by GL(2, ℝ) or by GL2(ℝ); this is a four-dimensional noncompact real Lie group.
This group is disconnected: it has two connected components corresponding to the positive and negative values of the determinant. The rotation matrices form a subgroup of GL(2, ℝ), denoted by SO(2, ℝ). It is a Lie group in its own right: specifically, using the rotation angle φ as a parameter, this group can be parametrized as SO(2, ℝ) = { (cos φ, −sin φ; sin φ, cos φ) : φ ∈ ℝ mod 2π }. Addition of the angles corresponds to multiplication of the elements of SO(2, ℝ), and taking the opposite angle corresponds to inversion; thus both multiplication and inversion are differentiable maps. The orthogonal group also forms an example of a Lie group. All of the preceding examples of Lie groups fall within the class of classical groups. Hilbert's fifth problem asked whether replacing differentiable manifolds with topological or analytic ones can yield new examples; if the underlying manifold is allowed to be infinite-dimensional, then one arrives at the notion of an infinite-dimensional Lie group.
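The parametrization of SO(2) and the correspondence between angle addition and matrix multiplication can be checked numerically (helper names are illustrative):

```python
import math

def rotation(phi):
    """The SO(2) element with rotation angle phi, as a 2x2 matrix."""
    return [[math.cos(phi), -math.sin(phi)],
            [math.sin(phi),  math.cos(phi)]]

def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b = 0.7, 1.1
product = matmul2(rotation(a), rotation(b))
combined = rotation(a + b)

# Multiplying group elements corresponds to adding their angles.
close = all(abs(product[i][j] - combined[i][j]) < 1e-12
            for i in range(2) for j in range(2))
print(close)    # True
```

Since both the product and the inverse (rotation by −φ) are given by smooth formulas in φ, the check illustrates concretely why the group operations of SO(2) are differentiable.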
18.
Compact space
–
In mathematics, and more specifically in general topology, compactness is a property that generalizes the notion of a subset of Euclidean space being closed and bounded. Examples include a closed interval, a rectangle, or a finite set of points. This notion is defined for more general topological spaces than Euclidean space in various ways. One such generalization is that a space is compact if any infinite sequence of points sampled from the space must frequently get arbitrarily close to some point of the space. An equivalent definition is that every infinite sequence of points must have an infinite subsequence that converges to some point of the space; the Heine–Borel theorem states that a subset of Euclidean space is compact in this sequential sense if and only if it is closed and bounded. Thus, if one chooses an infinite number of points in the closed unit interval [0, 1], some of those points must get arbitrarily close to some real number in that space. For instance, some of the numbers 1/2, 4/5, 1/3, 5/6, 1/4, 6/7, … accumulate to 0, while others accumulate to 1. The same set of points would not accumulate to any point of the open unit interval (0, 1), so the open unit interval is not compact. Euclidean space itself is not compact since it is not bounded; in particular, the sequence of points 0, 1, 2, 3, … has no subsequence that converges to any given real number. Apart from closed and bounded subsets of Euclidean space, typical examples of compact spaces include spaces consisting not of geometrical points but of functions; the term compact was introduced into mathematics by Maurice Fréchet in 1904 as a distillation of this concept. Various equivalent notions of compactness, including sequential compactness and limit point compactness, can be developed in general metric spaces; in general topological spaces, however, different notions of compactness are not necessarily equivalent. This more subtle notion, compactness in terms of open covers, was introduced by Pavel Alexandrov and Pavel Urysohn in 1929. The term compact set is sometimes a synonym for compact space, but usually refers to a compact subspace of a topological space.
In the 19th century, several disparate mathematical properties were understood that would later be seen as consequences of compactness. On the one hand, Bernard Bolzano had been aware that any bounded sequence of points has a subsequence that must eventually get arbitrarily close to some other point, called a limit point. Bolzano's proof relied on bisection: the sequence was placed into an interval, which was divided into two halves, and a half containing infinitely many terms was selected; the process could then be repeated by dividing the resulting smaller interval into smaller and smaller parts until it closes down on the desired limit point. The full significance of Bolzano's theorem, and its method of proof, would not emerge until almost 50 years later, when it was rediscovered by Karl Weierstrass. In the 1880s, it became clear that results similar to the Bolzano–Weierstrass theorem could be formulated for spaces of functions rather than just numbers or geometrical points. The idea of regarding functions as points of a generalized space dates back to the investigations of Giulio Ascoli and Cesare Arzelà. The uniform limit of a sequence of functions then played precisely the same role as Bolzano's limit point, and this ultimately led to the notion of a compact operator as an offshoot of the general notion of a compact space. It was Maurice Fréchet who, in 1906, had distilled the essence of the Bolzano–Weierstrass property. In 1870, Eduard Heine showed that a continuous function defined on a closed and bounded interval was in fact uniformly continuous.
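A small sketch of the interleaved sequence 1/2, 4/5, 1/3, 5/6, 1/4, 6/7, … from the compactness discussion, exhibiting subsequences that accumulate at 0 and at 1 (the indexing helper is ours):

```python
# Odd-position terms 1/2, 1/3, 1/4, ... approach 0 while even-position terms
# 4/5, 5/6, 6/7, ... approach 1, so the sequence accumulates at both
# endpoints of the compact interval [0, 1].
def term(k):
    n = k // 2 + 2
    return 1.0 / n if k % 2 == 0 else (n + 2.0) / (n + 3.0)

evens = [term(2 * i) for i in range(5000)]       # subsequence tending to 0
odds = [term(2 * i + 1) for i in range(5000)]    # subsequence tending to 1
print(evens[-1] < 1e-3)     # True
print(odds[-1] > 0.999)     # True
```

Both limit points 0 and 1 belong to [0, 1], consistent with compactness of the closed interval; neither belongs to the open interval (0, 1), which is why the open interval fails to be compact.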
19.
Linear transformation
–
In mathematics, a linear map is a mapping V → W between two vector spaces (or, more generally, two modules) that preserves the operations of addition and scalar multiplication. An important special case is when V = W, in which case the map is called a linear operator, or an endomorphism of V. Sometimes the term linear function has the same meaning as linear map. A linear map always maps linear subspaces onto linear subspaces; for instance, it maps a plane through the origin to a plane through the origin or to a line through the origin. Linear maps can often be represented as matrices, and simple examples include rotation and reflection linear transformations. In the language of abstract algebra, a linear map is a module homomorphism; in the language of category theory it is a morphism in the category of modules over a given ring. Let V and W be vector spaces over the same field K. A function f : V → W is a linear map if for any vectors x1, …, xm ∈ V and scalars a1, …, am ∈ K, the equality f(a1 x1 + ⋯ + am xm) = a1 f(x1) + ⋯ + am f(xm) holds. When V and W can be regarded as vector spaces over different fields, it is necessary to specify which of these fields is being used in the definition of linear. If V and W are considered as spaces over the field K as above, we speak of K-linear maps; for example, the conjugation of complex numbers is an R-linear map C → C, but it is not C-linear. A linear map from V to K is called a linear functional. These statements generalize to any left-module RM over a ring R without modification, and to any right-module upon reversing of the scalar multiplication. The zero map between two left-modules over the same ring is always linear. The identity map on any module is a linear operator. Any homothety centered at the origin of a vector space, v ↦ cv where c is a scalar, is a linear operator; this does not hold in general for modules, where such a map might only be semilinear. For real numbers, the map x ↦ x² is not linear. A matrix A determines a linear map x ↦ Ax between coordinate spaces; conversely, any linear map between finite-dimensional vector spaces can be represented in this manner, see the following section. Differentiation defines a linear map from the space of all differentiable functions to the space of all functions. It also defines a linear operator on the space of all smooth functions.
If V and W are finite-dimensional vector spaces over a field F, then the functions that send linear maps f : V → W to dimF(W) × dimF(V) matrices in the way described in the sequel are themselves linear maps. The expected value of a random variable is linear: for random variables X and Y we have E[X + Y] = E[X] + E[Y] and E[aX] = aE[X].
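The defining property of a linear map can be verified numerically for a map given by a matrix (the matrix M and sample vectors are illustrative choices, not from the article):

```python
# Check the defining identity f(a*x + b*y) = a*f(x) + b*f(y)
# for the map f(v) = M v given by a 2x2 matrix M.
M = [[2.0, 1.0],
     [0.0, 3.0]]

def f(v):
    """Apply the matrix M to a 2-vector v."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def axpy(a, x, b, y):
    """Linear combination a*x + b*y of 2-vectors."""
    return [a * x[0] + b * y[0], a * x[1] + b * y[1]]

x, y = [1.0, -2.0], [3.0, 0.5]
a, b = 4.0, -1.5
lhs = f(axpy(a, x, b, y))
rhs = axpy(a, f(x), b, f(y))
print(lhs == rhs)    # True: f preserves linear combinations
```

The same check fails for a nonlinear map such as v ↦ (v₀², v₁), which is one quick way to see that squaring is not linear.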
20.
Matrix (mathematics)
–
In mathematics, a matrix is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns. For example, the dimensions of a matrix with two rows and three columns are 2 × 3. The individual items in an m × n matrix A, often denoted by ai,j, where i runs from 1 to m and j from 1 to n, are called its elements or entries. Provided that they have the same size, two matrices can be added or subtracted element by element. The rule for matrix multiplication, however, is that two matrices can be multiplied only when the number of columns in the first equals the number of rows in the second. Any matrix can be multiplied element-wise by a scalar from its associated field. A major application of matrices is to represent linear transformations, that is, generalizations of linear functions such as f(x) = 4x. The product of two matrices is a matrix that represents the composition of the two corresponding linear transformations. Another application of matrices is in the solution of systems of linear equations. If the matrix is square, it is possible to deduce some of its properties by computing its determinant; for example, a square matrix has an inverse if and only if its determinant is not zero. Insight into the geometry of a linear transformation is obtainable from the matrix's eigenvalues and eigenvectors. Applications of matrices are found in most scientific fields: in computer graphics, they are used to manipulate 3D models and project them onto a 2-dimensional screen. Matrix calculus generalizes classical analytical notions such as derivatives and exponentials to higher dimensions, and matrices are used in economics to describe systems of economic relationships. A major branch of numerical analysis is devoted to the development of efficient algorithms for matrix computations. Matrix decomposition methods simplify computations, both theoretically and practically, and algorithms that are tailored to particular matrix structures, such as sparse matrices and near-diagonal matrices, expedite computations in the finite element method and other computations.
Infinite matrices occur in planetary theory and in atomic theory; a simple example of an infinite matrix is the matrix representing the derivative operator, which acts on the Taylor series of a function. A matrix is a rectangular array of numbers or other mathematical objects for which operations such as addition and multiplication are defined. Most commonly, a matrix over a field F is a rectangular array of scalars, each of which is a member of F. Most of this article focuses on real and complex matrices, that is, matrices whose elements are real numbers or complex numbers, respectively. More general types of entries are discussed below. For instance, this is a real matrix: A = [1 9 −13; 20 5 −6].
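Element-by-element addition and scalar multiplication can be sketched with plain nested lists (the second matrix is an illustrative choice):

```python
# Same-size matrices are added entry by entry; a scalar multiplies every entry.
def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def scalar_mul(c, A):
    return [[c * A[i][j] for j in range(len(A[0]))] for i in range(len(A))]

A = [[1, 9, -13],
     [20, 5, -6]]       # a 2 x 3 real matrix
B = [[0, 1, 1],
     [2, 0, 4]]         # another 2 x 3 matrix, required to match A's size

print(mat_add(A, B))        # [[1, 10, -12], [22, 5, -2]]
print(scalar_mul(2, A))     # [[2, 18, -26], [40, 10, -12]]
```

Attempting mat_add on matrices of different sizes would fail, mirroring the rule that addition is only defined for matrices of the same dimensions.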
21.
Basis of a vector space
–
In more general terms, a basis is a linearly independent spanning set. Given a basis of a vector space V, every element of V can be expressed uniquely as a linear combination of basis vectors. A vector space can have several distinct sets of basis vectors; however, each such set has the same number of elements. A basis B of a vector space V over a field F is a linearly independent subset of V that spans V. In more detail, suppose that B = {v1, …, vn} is a subset of a vector space V over a field F; B is a basis if every x in V can be written uniquely as x = a1 v1 + ⋯ + an vn with a1, …, an in F. The numbers ai are called the coordinates of the vector x with respect to the basis B. A vector space that has a finite basis is called finite-dimensional. To deal with infinite-dimensional spaces, we must generalize the definition to include infinite basis sets. The sums in the definition are all finite because without additional structure the axioms of a vector space do not permit us to meaningfully speak about an infinite sum of vectors. Settings that permit infinite linear combinations allow alternative definitions of the basis concept. It is often convenient to list the basis vectors in a specific order, for example, when considering the transformation matrix of a linear map with respect to a basis; we then speak of an ordered basis, which we define to be a sequence of linearly independent vectors that span V. To summarize: B is a set of linearly independent vectors, i.e. it is a linearly independent set, and every vector in V can be expressed as a linear combination of vectors in B in a unique way. If the basis is ordered then the coefficients in this linear combination provide coordinates of the vector relative to the basis. Every vector space has a basis; the proof of this requires the axiom of choice. All bases of a vector space have the same cardinality, called the dimension of the vector space; this result is known as the dimension theorem, and requires the ultrafilter lemma, a strictly weaker form of the axiom of choice.
Many vector spaces can also be given a standard basis, which is both spanning and linearly independent. For example: in Rn, the standard basis is {e1, …, en}, where ei is the ith column of the identity matrix; in P2, the set of all polynomials of degree at most 2, the standard basis is {1, x, x²}; and in M2,2, the set of all 2 × 2 matrices, the standard basis is {M1,1, M1,2, M2,1, M2,2}, where Mm,n is the 2 × 2 matrix with a 1 in the (m, n) position and zeros elsewhere. Given a vector space V over a field F, suppose that two bases for V are given.
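A small sketch of coordinates with respect to a basis of R², computed with Cramer's rule (the non-standard basis here is an illustrative choice):

```python
# Coordinates of a vector x in R^2 with respect to a basis {b1, b2}:
# solve x = a1*b1 + a2*b2 by Cramer's rule, assuming b1, b2 are independent.
def coordinates(b1, b2, x):
    det = b1[0] * b2[1] - b2[0] * b1[1]
    a1 = (x[0] * b2[1] - b2[0] * x[1]) / det
    a2 = (b1[0] * x[1] - x[0] * b1[1]) / det
    return (a1, a2)

# In the standard basis e1 = (1, 0), e2 = (0, 1), coordinates are the entries.
print(coordinates((1, 0), (0, 1), (3, 5)))      # (3.0, 5.0)

# In the basis {(1, 1), (1, -1)} the same vector has different coordinates.
a1, a2 = coordinates((1, 1), (1, -1), (3, 5))
print((a1, a2))                                  # (4.0, -1.0)
print((a1 + a2, a1 - a2))                        # recombines to (3.0, 5.0)
```

The same vector thus has different coordinate tuples in different bases, while the underlying element of the vector space is unchanged.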
22.
Determinant
–
In linear algebra, the determinant is a useful value that can be computed from the elements of a square matrix. The determinant of a matrix A is denoted det(A), det A, or |A|; it can be viewed as the scaling factor of the linear transformation described by the matrix. In the case of a 2 × 2 matrix, the determinant is given by the formula ad − bc below, and the determinant of a 3 × 3 matrix can be computed by expanding along a row or column in terms of 2 × 2 determinants. Each determinant of a 2 × 2 matrix in this expansion is called a minor of the matrix A; the same sort of procedure can be used to find the determinant of a 4 × 4 matrix, the determinant of a 5 × 5 matrix, and so forth. The use of determinants in calculus includes the Jacobian determinant in the change of variables rule for integrals of functions of several variables. Determinants are also used to define the characteristic polynomial of a matrix; in analytic geometry, determinants express the signed n-dimensional volumes of n-dimensional parallelepipeds. Sometimes, determinants are used merely as a compact notation for expressions that would otherwise be unwieldy to write down. When the entries of the matrix are taken from a field, it can be proven that a matrix has an inverse if and only if its determinant is nonzero. There are various equivalent ways to define the determinant of a square matrix A, i.e. one with the same number of rows and columns; the Leibniz formula expresses it as a signed sum, over all permutations, of products of entries. Another way to define the determinant is expressed in terms of the columns of the matrix, and these properties mean that the determinant is an alternating multilinear function of the columns that maps the identity matrix to the underlying unit scalar. These properties suffice to uniquely calculate the determinant of any square matrix, provided the underlying scalars form a field; the definition below shows that such a function exists, and it can be shown to be unique. Assume A is a square matrix with n rows and n columns. The entries can be numbers or expressions; the definition of the determinant depends only on the fact that they can be added and multiplied together in a commutative manner. The determinant of a 2 × 2 matrix is defined by |a b; c d| = ad − bc.
If the matrix entries are real numbers, the matrix A can be used to represent two linear maps: one that maps the standard basis vectors to the rows of A, and one that maps them to the columns of A. In either case, the images of the basis vectors form a parallelogram that represents the image of the unit square under the mapping. The parallelogram defined by the rows of the matrix (a b; c d) is the one with vertices at (0, 0), (a, b), (a + c, b + d), and (c, d). The absolute value of ad − bc is the area of the parallelogram; the absolute value of the determinant together with the sign becomes the oriented area of the parallelogram.
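The 2 × 2 formula and its interpretation as a signed area can be illustrated directly (the sample entries are ours):

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

# Rows (a, b) = (3, 1) and (c, d) = (1, 2) span a parallelogram with vertices
# (0, 0), (3, 1), (4, 3), (1, 2); |det| is its area, the sign its orientation.
print(det2(3, 1, 1, 2))       # 5
print(det2(1, 2, 3, 1))       # -5: swapping the rows flips the orientation
print(det2(1, 0, 0, 1))       # 1: the identity matrix has determinant 1
```

The row swap changing 5 to −5 is a concrete instance of the determinant being an alternating function of the rows (equivalently, of the columns).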
23.
Matrix multiplication
–
In mathematics, matrix multiplication or the matrix product is a binary operation that produces a matrix from two matrices. The definition is motivated by linear equations and linear transformations on vectors, which have applications in applied mathematics and physics. When two linear transformations are represented by matrices, then the matrix product represents the composition of the two transformations. The matrix product is not commutative in general, although it is associative and is distributive over matrix addition. The identity element of the matrix product is the identity matrix, and a square matrix may have an inverse matrix. Determinant multiplicativity applies to the matrix product. The matrix product is also important for matrix groups, and for the theory of group representations and irreps. Computing matrix products is both a central operation in many numerical algorithms and potentially time consuming, making it one of the most well-studied problems in numerical computing. Various algorithms have been devised for computing C = AB, especially for large matrices. Index notation is often the clearest way to express definitions, and is used as standard in the literature. The i, j entry of matrix A is indicated by (A)ij or Aij, whereas a numerical label on a collection of matrices is subscripted only, e.g. A1, A2, etc. Assume two matrices A and B are to be multiplied, where A is n × m and B is m × p. The i, j entry of the product is obtained by multiplying term by term the entries of the ith row of A and the jth column of B, for k = 1, …, m, and summing the results over k: (AB)ij = Σₖ₌₁ᵐ Aik Bkj. Thus the product AB is defined if the number of columns in A is equal to the number of rows in B, and each entry may be computed one at a time. Sometimes, the summation convention is used, as it is understood to sum over the repeated index k; to prevent any ambiguity, this convention will not be used in this article. Usually the entries are numbers or expressions, but they can even be matrices themselves.
The matrix product can still be calculated exactly the same way in that case; see below for details on how the matrix product can be calculated in terms of blocks taking the forms of rows and columns. The figure to the right illustrates diagrammatically the product of two matrices A and B, showing how each intersection in the product matrix corresponds to a row of A and a column of B. Note that AB and BA may be two different matrices: if A is a 1 × 3 row matrix and B is a 3 × 1 column matrix, their product AB is a 1 × 1 matrix while BA is a 3 × 3 matrix, and if the dimensions are incompatible in one of the two orders, that product is not defined at all. The product of a square matrix multiplied by a column matrix arises naturally in linear algebra, for solving linear equations and representing linear transformations. By choosing the entries a, b, c, p, q, r, u, v, w of a 3 × 3 matrix A appropriately, A can represent a variety of transformations, such as rotations, scaling and reflections, and shears.
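The index formula for the product can be written out directly as a sketch (the sample matrices are illustrative):

```python
# (AB)_{ij} = sum over k of A_{ik} * B_{kj}, for an n x m matrix A
# and an m x p matrix B, giving an n x p result.
def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    assert len(A[0]) == m, "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]

print(matmul(A, B))    # [[2, 1], [4, 3]]
print(matmul(B, A))    # [[3, 4], [1, 2]]: AB != BA, the product is not commutative
```

The two outputs differ, which is the non-commutativity noted above; the assertion enforces the compatibility rule on dimensions.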
24.
Elementary particle
–
In particle physics, an elementary particle or fundamental particle is a particle whose substructure is unknown; thus, it is unknown whether it is composed of other particles. A particle containing two or more elementary particles is a composite particle. Soon, subatomic constituents of the atom were identified. As the 1930s opened, the electron and the proton had been observed, along with the photon, the quantum of electromagnetic radiation. Via quantum theory, protons and neutrons were found to contain quarks—up quarks and down quarks—now considered elementary particles. Within a material, the electron's three degrees of freedom (charge, spin, and orbital) can separate via the wavefunction into three quasiparticles; yet a free electron—which, not orbiting a nucleus, lacks orbital motion—appears unsplittable. Meanwhile, an elementary boson mediating gravitation—the graviton—remains hypothetical. All elementary particles are—depending on their spin—either bosons or fermions. These are differentiated via the spin–statistics theorem of quantum statistics: particles of half-integer spin exhibit Fermi–Dirac statistics and are fermions; particles of integer spin exhibit Bose–Einstein statistics and are bosons. In the Standard Model, elementary particles are represented for predictive utility as point particles; though extremely successful, the Standard Model is limited to the microcosm by its omission of gravitation and has some parameters arbitrarily added but unexplained. According to the current models of big bang nucleosynthesis, the primordial composition of visible matter of the universe should be about 75% hydrogen and 25% helium-4. Neutrons are made up of one up and two down quarks, while protons are made of two up and one down quark. Since the other common elementary particles are so light or so rare when compared to atomic nuclei, one can conclude that most of the mass of the visible universe consists of protons and neutrons.
Some estimates imply that there are roughly 10⁸⁰ baryons in the observable universe; the number of protons in the observable universe is called the Eddington number. Other estimates imply that roughly 10⁹⁷ elementary particles exist in the universe, mostly photons and gravitons. However, the Standard Model is widely considered to be a provisional theory rather than a truly fundamental one. The 12 fundamental fermionic flavours are divided into three generations of four particles each; six of the particles are quarks. The remaining six are leptons, three of which are neutrinos, while the other three have an electric charge of −1: the electron and its two cousins, the muon and the tau.
25.
Spin (physics)
–
In quantum mechanics and particle physics, spin is an intrinsic form of angular momentum carried by elementary particles, composite particles, and atomic nuclei. Spin is one of two types of angular momentum in quantum mechanics, the other being orbital angular momentum. In some ways, spin is like a vector quantity: it has a definite magnitude and a direction. All elementary particles of a given kind have the same magnitude of spin angular momentum, which is indicated by assigning the particle a spin quantum number. The SI unit of spin is the same as that of classical angular momentum (N·m·s, or kg·m²·s⁻¹); in practice, however, spin is usually given as a dimensionless spin quantum number, obtained by dividing the spin angular momentum by the reduced Planck constant ħ. Very often, the spin quantum number is simply called spin, leaving its meaning as the unitless spin quantum number to be inferred from context. When combined with the spin–statistics theorem, the spin of electrons results in the Pauli exclusion principle. Wolfgang Pauli was the first to propose the concept of spin; in 1925, Ralph Kronig, George Uhlenbeck and Samuel Goudsmit at Leiden University suggested a physical interpretation of particles spinning around their own axis. The mathematical theory was worked out in depth by Pauli in 1927, and when Paul Dirac derived his relativistic quantum mechanics in 1928, electron spin was an essential part of it. As the name suggests, spin was originally conceived as the rotation of a particle around some axis, and this picture is correct so far as spin obeys the same mathematical laws as quantized angular momenta do. On the other hand, spin has some peculiar properties that distinguish it from orbital angular momenta: although the direction of its spin can be changed, an elementary particle cannot be made to spin faster or slower. The spin of a charged particle is associated with a magnetic dipole moment with a g-factor differing from 1; this could only occur classically if the internal charge of the particle were distributed differently from its mass. The conventional definition of the spin quantum number, s, is s = n/2, where n can be any non-negative integer.
Hence the allowed values of s are 0, 1/2, 1, 3/2, 2, etc. The value of s for an elementary particle depends only on the type of particle, and cannot be altered in any known way. The spin angular momentum, S, of any physical system is quantized; the allowed values of S are S = ħ√(s(s + 1)) = (h/4π)√(n(n + 2)), where h is the Planck constant. In contrast, orbital angular momentum can only take on integer values of s, i.e. even-numbered values of n. Those particles with half-integer spins, such as 1/2, 3/2, 5/2, are known as fermions, while those particles with integer spins, such as 0, 1, 2, are known as bosons. The two families of particles obey different rules and broadly have different roles in the world around us. A key distinction between the two families is that fermions obey the Pauli exclusion principle: that is, there cannot be two identical fermions simultaneously having the same quantum numbers.
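A small sketch of the allowed spin magnitudes S = ħ√(s(s + 1)) in units of ħ (the helper names and the sampled values of s are ours):

```python
import math

H_BAR = 1.0  # work in units of the reduced Planck constant

def spin_magnitude(s):
    """|S| = hbar * sqrt(s(s+1)) for spin quantum number s = n/2."""
    return H_BAR * math.sqrt(s * (s + 1))

# Half-integer s gives fermions; integer s gives bosons.
for s in [0, 0.5, 1, 1.5, 2]:
    kind = "boson" if s == int(s) else "fermion"
    print(s, round(spin_magnitude(s), 4), kind)
```

Note that the magnitude √(s(s + 1)) ħ always exceeds the maximum projection s ħ for s > 0, a standard feature of quantized angular momentum.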
26.
Angle
–
In planar geometry, an angle is the figure formed by two rays, called the sides of the angle, sharing a common endpoint, called the vertex of the angle. Angles formed by two rays lie in a plane, but this plane does not have to be a Euclidean plane. Angles are also formed by the intersection of two planes in Euclidean and other spaces. Angles formed by the intersection of two curves in a plane are defined as the angle determined by the tangent rays at the point of intersection. Similar statements hold in space; for example, the angle formed by two great circles on a sphere is the dihedral angle between the planes determined by the great circles. Angle is also used to designate the measure of an angle or of a rotation. This measure is the ratio of the length of a circular arc to its radius. In the case of a geometric angle, the arc is centered at the vertex and delimited by the sides. In the case of a rotation, the arc is centered at the center of the rotation and delimited by any other point and its image by the rotation. The word angle comes from the Latin word angulus, meaning corner; cognate words are the Greek ἀγκύλος, meaning crooked, curved. Both are connected with the Proto-Indo-European root *ank-, meaning to bend or bow. Euclid defines a plane angle as the inclination to each other, in a plane, of two lines which meet each other and do not lie straight with respect to each other; according to Proclus, an angle must be either a quality or a quantity, or a relationship. In mathematical expressions, it is common to use Greek letters to serve as variables standing for the size of some angle. Lower case Roman letters are also used, as are upper case Roman letters in the context of polygons. See the figures in this article for examples. In geometric figures, angles may also be identified by the labels attached to the three points that define them; for example, the angle at vertex A enclosed by the rays AB and AC is denoted ∠BAC. Sometimes, where there is no risk of confusion, the angle may be referred to simply by its vertex.
However, in many geometrical situations it is obvious from context that the positive angle less than or equal to 180 degrees is meant. Otherwise, a convention may be adopted so that ∠BAC always refers to the angle measured from B to C in a fixed direction. Angles smaller than a right angle are called acute angles. An angle equal to 1/4 turn is called a right angle; two lines that form a right angle are said to be normal, orthogonal, or perpendicular. Angles larger than a right angle and smaller than a straight angle are called obtuse angles. An angle equal to 1/2 turn is called a straight angle, and angles larger than a straight angle but less than 1 turn are called reflex angles.
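The measure of an angle as the ratio of arc length to radius can be sketched directly (helper names are illustrative):

```python
import math

def angle_from_arc(arc_length, radius):
    """Angle in radians: the ratio of circular arc length to radius."""
    return arc_length / radius

# A quarter circle of radius 2 has arc length pi, giving a right angle.
theta = angle_from_arc(math.pi, 2.0)
print(theta == math.pi / 2)                         # True: 1/4 turn is a right angle
print(math.degrees(theta))                          # 90.0 (up to rounding)

# The measure does not depend on the radius chosen.
print(angle_from_arc(2 * math.pi, 4.0) == theta)    # True
```

The last check reflects why the ratio definition is well posed: scaling the circle scales arc length and radius together, leaving the angle measure unchanged.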
27.
Dot product
–
In mathematics, the dot product or scalar product is an algebraic operation that takes two equal-length sequences of numbers and returns a single number. Sometimes it is called the inner product in the context of Euclidean space. Algebraically, the dot product is the sum of the products of the corresponding entries of the two sequences of numbers. Geometrically, it is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them; the dot product may thus be defined algebraically or geometrically. The geometric definition is based on the notions of angle and distance, and the equivalence of these two definitions relies on having a Cartesian coordinate system for Euclidean space. In modern presentations of Euclidean geometry, the notions of length and angle are not primitive, so the equivalence of the two definitions of the dot product is a part of the equivalence of the classical and the modern formulations of Euclidean geometry. For instance, in three-dimensional space, the dot product of the vectors [1, 3, −5] and [4, −2, −1] is 1·4 + 3·(−2) + (−5)·(−1) = 4 − 6 + 5 = 3. In Euclidean space, a Euclidean vector is an object that possesses both a magnitude and a direction. A vector can be pictured as an arrow; its magnitude is its length, and its direction is the direction that the arrow points. The magnitude of a vector a is denoted by ∥a∥. The dot product of two Euclidean vectors a and b is defined by a ⋅ b = ∥a∥ ∥b∥ cos θ, where θ is the angle between a and b. In particular, if a and b are orthogonal, then the angle between them is 90° and a ⋅ b = 0. The scalar projection of a Euclidean vector a in the direction of a Euclidean vector b is given by a_b = ∥a∥ cos θ, where θ is the angle between a and b. In terms of the geometric definition of the dot product, this can be rewritten a_b = a ⋅ b̂, where b̂ = b/∥b∥ is the unit vector in the direction of b. The dot product is thus characterized geometrically by a ⋅ b = a_b ∥b∥ = b_a ∥a∥. The dot product, defined in this manner, is homogeneous under scaling in each variable, and it also satisfies a distributive law, meaning that a ⋅ (b + c) = a ⋅ b + a ⋅ c.
These properties may be summarized by saying that the dot product is a bilinear form. Moreover, this bilinear form is positive definite, which means that a ⋅ a is never negative and is zero if and only if a = 0. If e1, ..., en are the standard basis vectors in Rn, then we may write a = ∑_i a_i e_i and b = ∑_i b_i e_i. The vectors e_i form an orthonormal basis, which means that they have unit length and are at right angles to each other. Hence, since these vectors have unit length, e_i ⋅ e_i = 1, and since they form right angles with each other, e_i ⋅ e_j = 0 for i ≠ j. Thus in general we can say that e_i ⋅ e_j = δ_ij, where δ_ij is the Kronecker delta.
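As a concrete illustration, the algebraic and geometric definitions can be checked against each other in a few lines of Python. This is a minimal sketch; the helper names `dot`, `norm` and `angle_between` are ours, not part of any standard API.

```python
import math

def dot(a, b):
    """Algebraic dot product: sum of products of corresponding entries."""
    assert len(a) == len(b)
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    """Euclidean magnitude ||a||."""
    return math.sqrt(dot(a, a))

def angle_between(a, b):
    """Recover theta from the geometric identity a.b = ||a|| ||b|| cos(theta)."""
    return math.acos(dot(a, b) / (norm(a) * norm(b)))

a, b = (1.0, 0.0), (0.0, 2.0)
print(dot(a, b))                           # 0.0: the vectors are orthogonal
print(math.degrees(angle_between(a, b)))   # 90.0
```

Note that `angle_between` is only defined for nonzero vectors, since it divides by the magnitudes.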
28.
Classical group
–
The complex classical Lie groups are four infinite families of Lie groups that, together with the exceptional groups, exhaust the classification of simple Lie groups. The compact classical groups are compact real forms of the classical groups. The finite analogues of the classical groups are the classical groups of Lie type. The term classical group was coined by Hermann Weyl, it being the title of his 1939 monograph The Classical Groups. The classical groups form the deepest and most useful part of the subject of linear Lie groups. Most types of classical groups find application in classical and modern physics; a few examples are the following: the rotation group SO(3) is a symmetry of Euclidean space and of all fundamental laws of physics, and the special unitary group SU(3) is the gauge group of quantum chromodynamics. The classical groups are exactly the general linear groups over R, C and H, together with the automorphism groups of the non-degenerate forms discussed below. These groups are usually additionally restricted to the subgroups whose elements have determinant 1. The classical groups, with the determinant 1 condition, are listed in the table below; in the sequel, the determinant 1 condition is not used consistently, in the interest of greater generality. The complex classical groups are SL(n, C), SO(n, C) and Sp(2n, C); a group is complex according to whether its Lie algebra is complex. The term real classical groups refers to all of the classical groups, since any Lie algebra is in particular a real algebra. The compact classical groups are the compact real forms of the complex classical groups; these are, in turn, SU(n), SO(n) and Sp(n). One characterization of the compact real form is in terms of the Lie algebra g: if g = u + iu, the complexification of u, and the connected group K generated by {exp X : X ∈ u} is compact, then K is a compact real form. The classical groups can uniformly be characterized in a different way using real forms.
The classical groups are the following: the complex linear algebraic groups SL, SO and Sp, together with their real forms. For instance, SO∗ is a real form of SO, SU is a real form of SL, and SL is a real form of SO. Without the determinant 1 condition, replace the special linear groups with the general linear groups in the characterization. The algebraic groups in question are Lie groups, but the algebraic qualifier is needed to get the notion of a real form. The classical groups are defined in terms of forms defined on Rn, Cn and Hn. The quaternions H do not constitute a field, because quaternion multiplication does not commute; they form a division ring, also called a skew field or non-commutative field.
29.
Rotation matrix
–
In linear algebra, a rotation matrix is a matrix that is used to perform a rotation in Euclidean space. For example, the matrix
R(θ) = [ cos θ  −sin θ
         sin θ   cos θ ]
rotates points in the xy-Cartesian plane counterclockwise through an angle θ about the origin of the Cartesian coordinate system. To perform the rotation using a rotation matrix R, the position of each point must be represented by a column vector v. The rotated vector is obtained by the matrix multiplication Rv. Rotation matrices also provide a means of numerically representing an arbitrary rotation of the axes about the origin, without appealing to angular specification. These coordinate rotations are a natural way to express the orientation of a camera, or the attitude of a spacecraft. The examples in this article apply to active rotations of vectors counterclockwise in a right-handed coordinate system by pre-multiplication; if any one of these conventions is changed, then the inverse of the matrix should be used. Since matrix multiplication leaves the zero vector (the coordinates of the origin) unchanged, rotation matrices can only be used to describe rotations about the origin of the coordinate system. Rotation matrices provide an algebraic description of such rotations, and are used extensively for computations in geometry and physics. Rotation matrices are square matrices with real entries. More specifically, they can be characterized as orthogonal matrices with determinant 1. In some literature, the term rotation is generalized to include improper rotations, characterized by orthogonal matrices with determinant −1; these combine proper rotations with reflections. In other cases, where reflections are not being considered, the label proper may be dropped; this convention is followed in this article. The set of all orthogonal matrices of size n with determinant +1 forms a group known as the special orthogonal group SO(n); the most important special case is the rotation group SO(3). The set of all orthogonal matrices of size n with determinant +1 or −1 forms the orthogonal group O(n).
In two dimensions, every rotation matrix has the form
R(θ) = [ cos θ  −sin θ
         sin θ   cos θ ].
This rotates column vectors by means of the matrix multiplication (x′, y′)ᵀ = R(θ) (x, y)ᵀ, so the coordinates of the point after rotation are x′ = x cos θ − y sin θ and y′ = x sin θ + y cos θ. The direction of rotation is counterclockwise if θ is positive.
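The two coordinate formulas above translate directly into code. Below is a minimal sketch in Python; the helper name `rotate` is ours.

```python
import math

def rotate(point, theta):
    """Rotate (x, y) counterclockwise by theta about the origin:
    x' = x cos(theta) - y sin(theta), y' = x sin(theta) + y cos(theta)."""
    x, y = point
    c, s = math.cos(theta), math.sin(theta)
    return (x * c - y * s, x * s + y * c)

# Rotating (1, 0) by +90 degrees carries it counterclockwise to (0, 1).
x, y = rotate((1.0, 0.0), math.pi / 2)
print(round(x, 10), round(y, 10))  # 0.0 1.0
```

The rounding only masks floating-point noise of order 1e-16 in the cosine term; the underlying rotation is exact up to machine precision.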
30.
Standard basis
–
In mathematics, the standard basis for a Euclidean space is the set of unit vectors pointing in the direction of the axes of a Cartesian coordinate system. For example, the standard basis for the Euclidean plane is formed by the vectors e_x = (1, 0) and e_y = (0, 1). Here the vector e_x points in the x direction and the vector e_y points in the y direction. There are several common notations for these vectors; they are often written with a hat to emphasize their status as unit vectors, and each of them is sometimes referred to as the versor of the corresponding Cartesian axis. These vectors are a basis in the sense that any other vector can be expressed uniquely as a linear combination of them. For example, every vector v in three-dimensional space can be written uniquely as v_x e_x + v_y e_y + v_z e_z, the scalars v_x, v_y, v_z being the scalar components of the vector v. In n-dimensional Euclidean space, the standard basis consists of n distinct vectors. Standard bases can be defined for other vector spaces, such as spaces of polynomials or of matrices. In both cases, the standard basis consists of those elements of the vector space for which all coefficients but one are 0 and the remaining coefficient is 1. For polynomials, the standard basis consists of the monomials and is commonly called the monomial basis. For the space of m×n matrices, the standard basis consists of the m×n matrices with exactly one non-zero entry, equal to 1. For example, the standard basis for 2×2 matrices is formed by the 4 matrices e11 = (1 0; 0 0), e12 = (0 1; 0 0), e21 = (0 0; 1 0), e22 = (0 0; 0 1). By definition, the standard basis is a sequence of pairwise orthogonal unit vectors; in other words, it is an ordered orthonormal basis. However, an ordered orthonormal basis is not necessarily a standard basis: for instance, the two vectors obtained by rotating the 2D standard basis described above by 30° are still orthonormal, but they do not form a standard basis. There is a standard basis also for the ring of polynomials in n indeterminates over a field, namely the monomials.
This family is the basis of the R-module of all families f from an index set I into a ring R that are zero except for a finite number of indices, if we interpret 1 as 1R, the unit of R. The existence of standard bases has become a topic of interest in computational algebraic geometry; it is now a part of a theory called standard monomial theory.
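The defining property — any vector is recovered as the linear combination of standard basis vectors weighted by its components — can be sketched in Python. The helper names `standard_basis` and `from_components` are ours.

```python
def standard_basis(n):
    """Standard basis of R^n: e_i has a 1 in position i and 0 elsewhere."""
    return [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]

def from_components(components, basis):
    """Rebuild v = sum_i v_i e_i from its scalar components."""
    n = len(basis[0])
    return tuple(sum(c * e[j] for c, e in zip(components, basis)) for j in range(n))

e = standard_basis(3)
print(e)                               # [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(from_components((2, -1, 5), e))  # (2, -1, 5)
```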
31.
Subgroup
–
In group theory, a branch of mathematics, given a group G under a binary operation ∗, a subset H of G is called a subgroup of G if H also forms a group under the operation ∗. More precisely, H is a subgroup of G if the restriction of ∗ to H × H is a group operation on H. This is usually denoted H ≤ G, read as "H is a subgroup of G". The trivial subgroup of any group is the subgroup consisting of just the identity element. A proper subgroup of a group G is a subgroup H which is a proper subset of G (that is, H ≠ G). This is usually represented notationally by H < G, read as "H is a proper subgroup of G". Some authors also exclude the trivial group from being proper. If H is a subgroup of G, then G is sometimes called an overgroup of H. The same definitions apply more generally when G is an arbitrary semigroup, but this article will only deal with subgroups of groups. The group G is sometimes denoted by the ordered pair (G, ∗), usually to emphasize the operation ∗ when G carries multiple algebraic or other structures; this article will write ab for a ∗ b, as is usual. A subset H of the group G is a subgroup of G if and only if it is nonempty and closed under products and inverses. In the case that H is finite, H is a subgroup if and only if H is closed under products. The above condition can also be stated in terms of a homomorphism. The identity of a subgroup is the identity of the group: if G is a group with identity eG, and H is a subgroup of G with identity eH, then eH = eG. The intersection of subgroups A and B is again a subgroup. The union of subgroups A and B is a subgroup if and only if either A or B contains the other: for example, 2 and 3 are in the union of 2Z and 3Z, but their sum 5 is not, so the union is not a subgroup. Another example is the union of the x-axis and the y-axis in the plane; each of these objects is a subgroup, but their union is not. This also serves as an example of two subgroups whose intersection is precisely the identity. If S is a subset of G, there is a minimum subgroup containing S, denoted <S>; an element of G is in <S> if and only if it is a finite product of elements of S and their inverses.
Every element a of a group G generates the cyclic subgroup <a>. If <a> is isomorphic to Z/nZ for some positive integer n, then n is the smallest positive integer for which aⁿ = e, and n is called the order of a. If <a> is isomorphic to Z, then a is said to have infinite order. The subgroups of any given group form a complete lattice under inclusion, called the lattice of subgroups. If e is the identity of G, then the trivial group {e} is the minimum subgroup of G.
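As a small illustration of cyclic subgroups and order, the sketch below enumerates <a> in the additive group Z/nZ by repeatedly adding the generator until it returns to the identity. The helper name `cyclic_subgroup` is ours.

```python
def cyclic_subgroup(a, n):
    """Elements of <a> in the additive group Z/nZ, and the order of a."""
    elements = []
    x = 0
    while True:
        x = (x + a) % n        # apply the generator once more
        elements.append(x)
        if x == 0:             # back at the identity: the cycle is complete
            break
    return sorted(elements), len(elements)

# In Z/6Z, 2 generates {0, 2, 4}, so 2 has order 3.
print(cyclic_subgroup(2, 6))  # ([0, 2, 4], 3)
```

The returned order is indeed the smallest positive k with k·a ≡ 0 (mod n), namely n / gcd(a, n).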
32.
Orthogonal group
–
In mathematics, the orthogonal group in dimension n, denoted O(n), is the group of distance-preserving transformations of a Euclidean space of dimension n that preserve a fixed point, where the group operation is composition of transformations. Equivalently, it is the group of n×n orthogonal matrices, where the group operation is given by matrix multiplication; an orthogonal matrix is a real matrix whose inverse equals its transpose. An important subgroup of O(n) is the special orthogonal group, denoted SO(n). This group is also called the rotation group, because in dimensions 2 and 3 its elements are the usual rotations around a point or an axis. In low dimension, these groups have been widely studied; see SO(2), SO(3) and SO(4). The orthogonal group is a subgroup of the general linear group GL(n, R), given by O(n) = {Q ∈ GL(n, R) : QᵀQ = QQᵀ = I}, where Qᵀ is the transpose of Q and I is the identity matrix. This article mainly discusses the orthogonal groups of quadratic forms that may be expressed over some bases as the dot product; over the reals, these are the positive definite forms. Over the reals, for any non-degenerate quadratic form there is a basis on which the matrix of the form is diagonal with entries 1 or −1. Thus the orthogonal group depends only on the numbers of 1's and of −1's, and is denoted O(p, q); for details, see indefinite orthogonal group. The derived subgroup Ω(n) of O(n) is an often studied object, and the Cartan–Dieudonné theorem describes the structure of the orthogonal group for a non-singular form. The determinant of any orthogonal matrix is either 1 or −1; the orthogonal n-by-n matrices with determinant 1 form a normal subgroup of O(n) known as the special orthogonal group SO(n), consisting of all proper rotations. By analogy with the pair GL–SL, the full orthogonal group is sometimes called the general orthogonal group and denoted GO(n). The term rotation group can be used to describe either the special or the general orthogonal group; when this distinction is to be emphasized, the groups may be denoted SO(n) and GO(n), reserving n for the dimension of the space. The letters p or r are also used, indicating the rank of the corresponding Lie algebra: in odd dimension 2r + 1 the corresponding Lie algebra is so(2r + 1), while in even dimension 2r the Lie algebra is so(2r). In two dimensions, O(2) is the group of all rotations about the origin and all reflections along a line through the origin, while SO(2) is the group of all rotations about the origin. These groups are closely related: SO(2) is a subgroup of O(2) of index 2.
More generally, in any number of dimensions, an even number of reflections composes to a rotation; therefore the rotations form a subgroup of O(n), but the reflections do not form a subgroup. A reflection through the origin (the map v ↦ −v) may be generated as the composition of one reflection along each of the coordinate axes. In even dimensions, the reflection through the origin is not a reflection in the usual sense (a reflection in a hyperplane) but rather a rotation, since its determinant is +1.
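The two defining checks — QᵀQ = I, and determinant +1 (rotation) versus −1 (reflection) — can be verified numerically. Below is a minimal sketch for the 2×2 case; the helper names `is_orthogonal` and `det2` are ours.

```python
def is_orthogonal(Q, tol=1e-9):
    """Check Q^T Q = I, i.e. the inverse of Q equals its transpose."""
    n = len(Q)
    for i in range(n):
        for j in range(n):
            entry = sum(Q[k][i] * Q[k][j] for k in range(n))   # (Q^T Q)[i][j]
            if abs(entry - (1.0 if i == j else 0.0)) > tol:
                return False
    return True

def det2(Q):
    """Determinant of a 2x2 matrix: +1 for rotations, -1 for reflections."""
    return Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]

rotation   = [[0.0, -1.0], [1.0, 0.0]]   # 90-degree rotation: in SO(2)
reflection = [[1.0, 0.0], [0.0, -1.0]]   # reflection across the x-axis: in O(2) but not SO(2)
print(is_orthogonal(rotation), det2(rotation))      # True 1.0
print(is_orthogonal(reflection), det2(reflection))  # True -1.0
```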
33.
Isomorphic
–
In mathematics, an isomorphism is a homomorphism or morphism that admits an inverse. Two mathematical objects are isomorphic if an isomorphism exists between them; an automorphism is an isomorphism whose source and target coincide. For most algebraic structures, including groups and rings, a homomorphism is an isomorphism if and only if it is bijective. In topology, where the morphisms are continuous functions, isomorphisms are also called homeomorphisms or bicontinuous functions. In mathematical analysis, where the morphisms are differentiable functions, isomorphisms are also called diffeomorphisms. A canonical isomorphism is a canonical map that is an isomorphism; two objects are said to be canonically isomorphic if there is a canonical isomorphism between them. Isomorphisms are formalized using category theory. As an example, let R+ be the multiplicative group of positive real numbers, and let R be the additive group of real numbers. The logarithm function log : R+ → R satisfies log(xy) = log x + log y for all x, y ∈ R+, so it is a group homomorphism. The exponential function exp : R → R+ satisfies exp(x + y) = exp(x) exp(y) for all x, y ∈ R. The identities log exp x = x and exp log y = y show that log and exp are inverses of each other; since log is a homomorphism that has an inverse that is also a homomorphism, log is an isomorphism of groups. Because log is an isomorphism, it translates multiplication of positive real numbers into addition of real numbers; this facility makes it possible to multiply real numbers using a ruler with a logarithmic scale. Consider the group (Z6, +), the integers from 0 to 5 with addition modulo 6, and the group (Z2 × Z3, +) of ordered pairs, with the first coordinate taken modulo 2 and the second modulo 3. These structures are isomorphic under addition if you identify them using the scheme (0, 0) ↦ 0, (1, 1) ↦ 1, (0, 2) ↦ 2, (1, 0) ↦ 3, (0, 1) ↦ 4, (1, 2) ↦ 5, or in general (a, b) ↦ (3a + 4b) mod 6. For example, (1, 1) + (1, 0) = (0, 1), which translates in the other system as 1 + 3 = 4. Even though these two groups look different in that the sets contain different elements, they are indeed isomorphic. More generally, the direct product of two cyclic groups Zm and Zn is isomorphic to Zmn if and only if m and n are coprime.
For example, if R is an ordering ≤ and S an ordering ⊑, then a bijection between the underlying sets that carries one ordering to the other is called an order isomorphism or an isotone isomorphism. If X = Y, then this is a relation-preserving automorphism. In a concrete category, such as the category of topological spaces or a category of algebraic objects like groups, rings, and modules, an isomorphism must be bijective on the underlying sets. In algebraic categories, an isomorphism is the same as a homomorphism which is bijective on the underlying sets. In abstract algebra, two basic isomorphisms are defined: group isomorphism, an isomorphism between groups, and ring isomorphism, an isomorphism between rings. Just as the automorphisms of an algebraic structure form a group, the isomorphisms between two algebraic structures sharing a common structure form a heap; letting a particular isomorphism identify the two structures turns this heap into a group.
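The Z2 × Z3 ≅ Z6 example above can be checked exhaustively in a few lines. This sketch verifies that the map φ(a, b) = (3a + 4b) mod 6 is both a bijection and a homomorphism; the name `phi` is ours.

```python
# phi identifies Z2 x Z3 with Z6: phi(a, b) = (3a + 4b) mod 6.
def phi(a, b):
    return (3 * a + 4 * b) % 6

pairs = [(a, b) for a in range(2) for b in range(3)]

# Bijective: every element of Z6 is hit exactly once.
print(sorted(phi(a, b) for a, b in pairs))  # [0, 1, 2, 3, 4, 5]

# Homomorphism: phi of a coordinate-wise sum equals the sum of the images mod 6.
ok = all(phi((a + c) % 2, (b + d) % 3) == (phi(a, b) + phi(c, d)) % 6
         for a, b in pairs for c, d in pairs)
print(ok)  # True
```

The check works because 3·2 ≡ 0 and 4·3 ≡ 0 (mod 6), so reducing the coordinates mod 2 and mod 3 never changes the image.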
34.
Matrix product
–
In mathematics, matrix multiplication or the matrix product is a binary operation that produces a matrix from two matrices. The definition is motivated by linear equations and linear transformations on vectors, which have applications in applied mathematics and physics. When two linear transformations are represented by matrices, the matrix product represents the composition of the two transformations. The matrix product is not commutative in general, although it is associative and distributive over matrix addition. The identity element of the matrix product is the identity matrix, and a square matrix may have an inverse matrix. Determinant multiplicativity applies to the matrix product. The matrix product is also important for matrix groups, and for the theory of group representations and irreps. Computing matrix products is both a central operation in many numerical algorithms and potentially time consuming, making it one of the most well-studied problems in numerical computing, and various algorithms have been devised for computing C = AB, especially for large matrices. Index notation is often the clearest way to express definitions, and is used as standard in the literature. The (i, j) entry of matrix A is indicated by (A)ij or Aij, whereas a numerical label on a collection of matrices is subscripted only, e.g. A1, A2. Assume two matrices are to be multiplied: an n × m matrix A and an m × p matrix B. The (i, j) entry of the product is obtained by multiplying the entries Aik by Bkj for k = 1, ..., m and summing the results over k: (AB)ij = ∑_{k=1}^{m} Aik Bkj. Thus the product AB is defined only if the number of columns in A is equal to the number of rows in B, and each entry may be computed one at a time. Sometimes the summation convention is used, as it is understood to sum over the repeated index k; to prevent any ambiguity, this convention will not be used in this article. Usually the entries are numbers or expressions, but they can even be matrices themselves.
The matrix product can still be calculated in exactly the same way in that case; see below for details on how the matrix product can be calculated in terms of blocks taking the forms of rows and columns. The figure to the right illustrates diagrammatically the product of two matrices A and B, showing how each intersection in the product matrix corresponds to a row of A and a column of B. Note that AB and BA may be two different matrices: if A is a 1 × 3 row vector and B is a 3 × 1 column vector, then AB is a 1 × 1 matrix while BA is a 3 × 3 matrix; and when the dimensions do not match, the product is not defined at all in one of the orders. The product of a square matrix multiplied by a column matrix arises naturally in linear algebra, for solving linear equations. By choosing the entries a, b, c, p, q, r, u, v, w of a 3 × 3 matrix A appropriately, A can represent a variety of transformations, such as rotations, scaling, reflections and shears.
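The index formula (AB)ij = ∑ₖ Aik Bkj translates directly into code, and a small example makes the non-commutativity visible. This is a minimal sketch; the helper name `matmul` is ours.

```python
def matmul(A, B):
    """C = AB where C[i][j] = sum_k A[i][k] * B[k][j].
    Requires the number of columns of A to equal the number of rows of B."""
    n, m, p = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]    # permutation matrix: swaps the two columns
print(matmul(A, B))     # [[2, 1], [4, 3]]
print(matmul(B, A))     # [[3, 4], [1, 2]] -- AB != BA in general
```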
35.
General linear group
–
In mathematics, the general linear group of degree n is the set of n×n invertible matrices, together with the operation of ordinary matrix multiplication. This forms a group, because the product of two invertible matrices is again invertible, and the inverse of an invertible matrix is invertible. To be more precise, it is necessary to specify what kind of objects may appear in the entries of the matrix. For example, the general linear group over R is the group of n×n invertible matrices of real numbers, and is denoted by GLn(R) or GL(n, R). More generally, the general linear group of degree n over any field F, or over a ring R, is the set of n×n invertible matrices with entries from F or R. Typical notation is GLn(F) or GL(n, F), or simply GL(n) if the field is understood. More generally still, the general linear group of a vector space, GL(V), is the abstract automorphism group of V, not necessarily written as matrices. The special linear group, written SL(n, F) or SLn(F), is the subgroup of GL(n, F) consisting of matrices with a determinant of 1. The group GL(n, F) and its subgroups are often called linear groups or matrix groups. These groups are important in the theory of group representations, and also arise in the study of spatial symmetries and symmetries of vector spaces in general. The modular group may be realised as a quotient of the special linear group SL(2, Z). If n ≥ 2, then the group GL(n, F) is not abelian. If V has finite dimension n, then GL(V) and GL(n, F) are isomorphic; the isomorphism is not canonical, since it depends on a choice of basis in V. In a similar way, for a commutative ring R the group GL(n, R) may be interpreted as the group of automorphisms of a free R-module M of rank n. One can also define GL(M) for any R-module, but in general this is not isomorphic to GL(n, R) for any n. Over a field F, a matrix is invertible if and only if its determinant is nonzero; therefore, an alternative definition of GL(n, F) is as the group of matrices with nonzero determinant.
Over a non-commutative ring R, determinants are not at all well behaved; in this case, GL(n, R) may be defined as the unit group of the matrix ring M(n, R). The general linear group GL(n, R) over the field of real numbers is a real Lie group of dimension n². To see this, note that the set of all n×n real matrices, Mn(R), forms a real vector space of dimension n². The subset GL(n, R) consists of those matrices whose determinant is non-zero. The determinant is a polynomial map, and hence GL(n, R) is an open affine subvariety of Mn(R). The Lie algebra of GL(n, R), denoted gl_n, consists of all n×n real matrices, with the commutator serving as the Lie bracket. As a manifold, GL(n, R) is not connected but rather has two connected components: the matrices with positive determinant and the ones with negative determinant. The identity component, denoted by GL+(n, R), consists of the real n×n matrices with positive determinant.
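The criterion "invertible if and only if the determinant is nonzero" is easy to demonstrate in the 2×2 case, where the inverse has a closed form. A minimal sketch; the helper name `inverse2` is ours.

```python
def inverse2(A):
    """A 2x2 real matrix is in GL(2, R) iff det(A) != 0; then
    [[a, b], [c, d]]^-1 = (1/det) * [[d, -b], [-c, a]]."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular: not in GL(2, R)")
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2.0, 1.0], [5.0, 3.0]]   # det = 1, so the inverse has integer entries
print(inverse2(A))              # [[3.0, -1.0], [-5.0, 2.0]]
```

Applying `inverse2` to a singular matrix such as [[1, 2], [2, 4]] raises the `ValueError`, matching the fact that such a matrix lies outside GL(2, R).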
36.
Real coordinate space
–
In mathematics, the real coordinate space of n dimensions, written Rn, is a coordinate space that allows several real variables to be treated as a single variable. With various numbers of dimensions, Rn is used in many areas of pure and applied mathematics. With component-wise addition and scalar multiplication, it is the prototypical real vector space and is a frequently used representation of Euclidean n-space. Due to the latter fact, geometric metaphors are widely used for Rn, namely a plane for R2. For any natural number n, the set Rn consists of all n-tuples of real numbers; it is called n-dimensional real space, and for each n there exists only one Rn, the real n-space. Purely mathematical uses of Rn can be classified as follows. First, linear algebra studies Rn and its properties under vector addition and linear transformations. Another use parametrizes geometric points with elements of Rn; this is common in analytic, differential and algebraic geometry. Rn, together with additional structures on it, is also extensively used in mathematical physics, dynamical systems theory and mathematical statistics. In applied mathematics and numerical analysis, arrays and sequences of real numbers are treated as elements of Rn, and any function f of n real variables can be considered as a function on Rn. The use of the real n-space, instead of several variables considered separately, can simplify notation. Consider, for n = 2, a function composition of the following form: F(t) = f(g1(t), g2(t)), where the functions g1 and g2 are continuous. If for every x1 ∈ R the function x2 ↦ f(x1, x2) is continuous, and for every x2 ∈ R the function x1 ↦ f(x1, x2) is continuous, then F is not necessarily continuous. Continuity of f in the natural R2 topology, also called multivariable continuity, is a stronger condition, and it is sufficient for continuity of the composition F. The coordinate space Rn forms a vector space over the field of real numbers with the addition of the structure of linearity. The operations on Rn as a vector space are typically defined by x + y = (x1 + y1, ..., xn + yn) and αx = (αx1, ..., αxn). The zero vector is given by 0 = (0, ..., 0), and the additive inverse of the vector x is given by −x = (−x1, ..., −xn).
This structure is important because any n-dimensional real vector space is isomorphic to the vector space Rn. In standard matrix notation, each element of Rn is typically written as a column vector x = (x1, ..., xn)ᵀ and sometimes as a row vector x = (x1, ..., xn). The coordinate space Rn may then be interpreted as the space of all n × 1 column vectors, or all 1 × n row vectors, with the matrix operations of addition and scalar multiplication. Linear transformations from Rn to Rm may then be written as m × n matrices, which act on the elements of Rn via left multiplication (when the elements are column vectors) and on elements of Rm via right multiplication (when they are row vectors).
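The component-wise vector space operations just defined can be sketched directly. The helper names `vadd` and `smul` are ours.

```python
def vadd(x, y):
    """Component-wise addition in R^n: x + y = (x1 + y1, ..., xn + yn)."""
    return tuple(a + b for a, b in zip(x, y))

def smul(alpha, x):
    """Scalar multiplication in R^n: alpha * x = (alpha*x1, ..., alpha*xn)."""
    return tuple(alpha * a for a in x)

x, y = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
print(vadd(x, y))              # (5.0, 7.0, 9.0)
print(smul(-1.0, x))           # (-1.0, -2.0, -3.0), the additive inverse of x
print(vadd(x, smul(-1.0, x)))  # (0.0, 0.0, 0.0), the zero vector
```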
37.
Nonabelian group
–
In mathematics, a nonabelian group, sometimes called a non-commutative group, is a group in which there exist at least two elements a and b such that a ∗ b ≠ b ∗ a. This class of groups contrasts with the abelian groups, in which all pairs of elements commute. Nonabelian groups are pervasive in mathematics and physics. One of the simplest examples of a nonabelian group is the dihedral group of order 6; it is the smallest finite nonabelian group. A common example from physics is the rotation group SO(3) in three dimensions. Both discrete groups and continuous groups may be nonabelian; most of the interesting Lie groups are nonabelian, and these play an important role in gauge theory. See also: Associative algebra, Noncommutative geometry, Niels Henrik Abel.
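The smallest nonabelian group can be exhibited concretely as the symmetric group S3 (which is isomorphic to the dihedral group of order 6). The sketch below represents its elements as permutations of {0, 1, 2} and finds pairs that fail to commute; the helper name `compose` is ours.

```python
from itertools import permutations

def compose(p, q):
    """Composition of permutations given as tuples: (p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

elements = list(permutations(range(3)))   # the six elements of S3
print(len(elements))  # 6

# Collect all pairs whose products differ depending on the order.
noncommuting = [(p, q) for p in elements for q in elements
                if compose(p, q) != compose(q, p)]
print(len(noncommuting) > 0)  # True: S3 is nonabelian
```

For instance, the transposition (1, 0, 2) and the 3-cycle (1, 2, 0) give different products in the two orders.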
38.
Linear subspace
–
A linear subspace is usually called simply a subspace when the context serves to distinguish it from other kinds of subspaces. Let K be a field and let V be a vector space over K. A subset W of V is a subspace if: the zero vector 0 is in W; if u and v are elements of W, then the sum u + v is an element of W; and if u is an element of W and c is a scalar in K, then cu is an element of W. Example I: Let the field K be R, let V be Rn, and take W to be the set of all vectors in V whose last component is 0. Then W is a subspace of V. Proof: Given u and v in W, they can be written u = (u1, ..., un−1, 0) and v = (v1, ..., vn−1, 0). Then u + v = (u1 + v1, ..., un−1 + vn−1, 0); thus, u + v is an element of W, too. Given u in W and a scalar c in R, cu = (cu1, ..., cun−1, 0), whose last component is again 0; thus, cu is an element of W too. Example II: Let the field be R again, but now let the vector space V be the Cartesian plane R2. Take W to be the set of points (x, y) of R2 such that x = y. Then W is a subspace of R2. Proof: Let p = (p1, p2) and q = (q1, q2) be elements of W, that is, points such that p1 = p2 and q1 = q2. Then p + q = (p1 + q1, p2 + q2); since p1 = p2 and q1 = q2, we have p1 + q1 = p2 + q2, so p + q is an element of W. Let p = (p1, p2) be an element of W, that is, a point in the plane such that p1 = p2, and let c be a scalar in R. Then cp = (cp1, cp2); since p1 = p2, we have cp1 = cp2, so cp is an element of W. In general, any subset of the coordinate space Rn that is defined by a system of homogeneous linear equations will yield a subspace; geometrically, these subspaces are points, lines, planes, and so on. Example III: Again take the field to be R, but now let the vector space V be the set R^R of all functions from R to R. Let C be the subset consisting of continuous functions. Then C is a subspace of R^R. Proof: We know from calculus that the zero function is continuous, so 0 ∈ C ⊂ R^R; we know from calculus that the sum of continuous functions is continuous; and, again from calculus, we know that the product of a continuous function and a constant is continuous. Example IV: Keep the same field and vector space as before, but now consider the set Diff of all differentiable functions. The same sort of argument as before shows that this is a subspace too. Examples that extend these themes are common in functional analysis.
A way to characterize subspaces is that they are closed under linear combinations: a nonempty subset W is a subspace if and only if every linear combination of finitely many elements of W also belongs to W. In a topological vector space X, a subspace W need not be topologically closed in general, but a finite-dimensional subspace is always closed. The same is true for subspaces of finite codimension, i.e. subspaces determined by a finite number of continuous linear functionals.
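Example II above can be spot-checked numerically: sample points of W = {(x, y) : x = y} and confirm both closure axioms hold. This is only an illustrative sampling, not a proof; the helper name `in_W` is ours.

```python
import random

def in_W(p):
    """Membership in the subspace W = {(x, y) in R^2 : x = y}."""
    return p[0] == p[1]

# Spot-check the two closure axioms on randomly sampled elements of W.
random.seed(0)
for _ in range(100):
    t, s, c = (random.uniform(-10, 10) for _ in range(3))
    p, q = (t, t), (s, s)
    assert in_W((p[0] + q[0], p[1] + q[1]))  # closed under addition
    assert in_W((c * p[0], c * p[1]))        # closed under scalar multiplication
print("closure checks passed")
```

Exact float comparison is safe here because both components of each test point are computed by the identical arithmetic expression.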
39.
Euler's rotation theorem
–
In geometry, Euler's rotation theorem states that, in three-dimensional space, any displacement of a rigid body such that a point on the rigid body remains fixed is equivalent to a single rotation about some axis that runs through the fixed point. It also means that the composition of two rotations is also a rotation; therefore the set of rotations has a group structure, known as a rotation group. The theorem is named after Leonhard Euler, who proved it in 1775 by means of spherical geometry. The axis of rotation is known as an Euler axis, typically represented by a unit vector ê; its product by the rotation angle is known as an axis–angle vector. The extension of the theorem to kinematics yields the concept of instant axis of rotation. In linear algebra terms, the theorem states that, in 3D space, any two Cartesian coordinate systems with a common origin are related by a rotation about some fixed axis; equivalently, a rotation matrix always has 1 as an eigenvalue, and the eigenvector corresponding to this eigenvalue is the axis of rotation connecting the two systems. Euler states the theorem as follows: "Theorema. Quomodocunque sphaera circa centrum suum conuertatur, semper assignari potest diameter"; or, when a sphere is moved around its centre it is always possible to find a diameter whose direction in the displaced position is the same as in the initial position. Euler's original proof was made using spherical geometry, and therefore whenever he speaks about triangles they must be understood as spherical triangles. To arrive at a proof, Euler analyses what the situation would look like if the theorem were true. He considers a great circle that does not contain the fixed point O, and its image after rotation, and labels a point on their intersection as point A. Now A is on the initial circle, so its image will be on the transported circle; he labels that image as point a. Since A is also on the transported circle, it is the image of another point that was on the initial circle, and he labels that preimage as ɑ. Then he considers the two arcs joining ɑ and a to A. These arcs have the same length because arc ɑA is mapped onto arc Aa. Also, since O is a fixed point, triangle ɑOA is mapped onto triangle AOa, so these triangles are isosceles.
Let us construct a point that could be invariant, using the previous considerations. We start with the blue great circle and its image under the transformation, which is the red great circle, as in Figure 1. Let point A be a point of intersection of those circles. If A is carried to itself by the rotation, we are done; otherwise we label A's image as a and its preimage as ɑ, and connect these two points to A with arcs ɑA and Aa. These arcs have the same length. Take O to be a point on the bisector of angle ɑAa; then, since ɑA = Aa and O is on the bisector of angle ɑAa, we also have ɑO = aO. Now let us suppose that O′ is the image of O. Then we know angle ɑAO = angle AaO′, and orientation is preserved, so O′ must be interior to angle ɑAa. Now AO is transformed to aO′, so AO = aO′. Since AO is also the same length as aO, angle AaO = angle aAO.
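The linear algebra form of the theorem — every 3D rotation matrix fixes an axis — can be demonstrated by extracting that axis from the skew-symmetric part of the matrix, since R − Rᵀ encodes 2 sin θ times the axis direction. A minimal sketch (the helper name `rotation_axis` is ours; the formula degenerates when the rotation angle is 0 or 180°, where sin θ = 0):

```python
import math

def rotation_axis(R):
    """Axis of a 3x3 rotation matrix (Euler's theorem guarantees one exists).
    The skew-symmetric part R - R^T equals 2 sin(theta) times the axis,
    written as a cross-product matrix; valid when sin(theta) != 0."""
    v = (R[2][1] - R[1][2], R[0][2] - R[2][0], R[1][0] - R[0][1])
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

# Rotation by 90 degrees about the z-axis.
Rz = [[0.0, -1.0, 0.0],
      [1.0,  0.0, 0.0],
      [0.0,  0.0, 1.0]]
print(rotation_axis(Rz))  # (0.0, 0.0, 1.0)
```

Equivalently, the returned direction is an eigenvector of R with eigenvalue 1, since points on the axis are left fixed by the rotation.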
40.
Orthogonal
–
The concept of orthogonality has been broadly generalized in mathematics, as well as in areas such as chemistry and engineering. The word comes from the Greek ὀρθός (orthos), meaning upright, and γωνία (gōnia), meaning angle. The ancient Greek ὀρθογώνιον (orthogōnion) and classical Latin orthogonium originally denoted a rectangle; later, they came to mean a right triangle. In the 12th century, the post-classical Latin word orthogonalis came to mean a right angle or something related to a right angle. In geometry, two Euclidean vectors are orthogonal if they are perpendicular, i.e. they form a right angle. Two vectors x and y in an inner product space V are orthogonal if their inner product ⟨x, y⟩ is zero; this relationship is denoted x ⊥ y. Two vector subspaces A and B of an inner product space V are called orthogonal subspaces if each vector in A is orthogonal to each vector in B. The largest subspace of V that is orthogonal to a given subspace is its orthogonal complement. Given a module M and its dual M∗, an element m′ of M∗ and an element m of M are orthogonal if m′(m) = 0; two sets S′ ⊆ M∗ and S ⊆ M are orthogonal if each element of S′ is orthogonal to each element of S. A term rewriting system is said to be orthogonal if it is left-linear and non-ambiguous; orthogonal term rewriting systems are confluent. A set of vectors in an inner product space is called pairwise orthogonal if each pairing of them is orthogonal; such a set is called an orthogonal set. Nonzero pairwise orthogonal vectors are always linearly independent. In certain cases, the word normal is used to mean orthogonal; for example, the y-axis is normal to the curve y = x² at the origin. However, normal may also refer to the magnitude of a vector. In particular, a set is called orthonormal if it is an orthogonal set of unit vectors. As a result, use of the word normal to mean orthogonal is often avoided; the word normal also has a different meaning in probability and statistics. A vector space with a bilinear form generalizes the case of an inner product.
When the bilinear form applied to two vectors results in zero, then they are orthogonal. The case of a pseudo-Euclidean plane uses the term hyperbolic orthogonality; in the diagram, axes x′ and t′ are hyperbolic-orthogonal for any given ϕ. In 2-D or higher-dimensional Euclidean space, two vectors are orthogonal if and only if their dot product is zero, i.e. they make an angle of 90°; hence orthogonality of vectors is an extension of the concept of perpendicular vectors into higher-dimensional spaces.
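The notions of an orthogonal set and the dot-product test can be combined into a small check. A minimal sketch; the helper names `dot` and `pairwise_orthogonal` are ours.

```python
def dot(x, y):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(x, y))

def pairwise_orthogonal(vectors, tol=1e-12):
    """True if every pair of distinct vectors has dot product zero."""
    return all(abs(dot(u, v)) <= tol
               for i, u in enumerate(vectors)
               for v in vectors[i + 1:])

orthogonal_set = [(1.0, 1.0, 0.0), (1.0, -1.0, 0.0), (0.0, 0.0, 2.0)]
print(pairwise_orthogonal(orthogonal_set))            # True
print(pairwise_orthogonal([(1.0, 0.0), (1.0, 1.0)]))  # False: angle is 45 degrees
```

Note that an orthogonal set (as here) need not be orthonormal: the vectors are pairwise perpendicular but not of unit length.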
41.
Angle of rotation
–
In mathematics, the angle of rotation is a measurement of the amount, namely the angle, by which a figure is rotated counterclockwise about a fixed point, often the center of a circle. A clockwise rotation is considered a negative rotation, so that, for instance, a rotation of 310° can also be called a rotation of −50°. For example, the carts on a Ferris wheel move along a circle around the center point of that circle. If a cart moves around the wheel once, its angle of rotation is 360 degrees; if the cart was stuck halfway, at the top of the wheel, its angle of rotation would be 180 degrees. In the context of rotational symmetry, the number of distinct orientations in which a figure looks the same is referred to as its order of symmetry. Angles are commonly measured in degrees, radians, gons and turns, sometimes also in angular mils; they are central to polar coordinates and trigonometry. See also: Hinge, Plane of rotation, Rotational symmetry.
42.
Clockwise and counterclockwise
–
Rotation can occur in two possible directions. A clockwise motion is one that proceeds in the same direction as a clock's hands: from the top to the right, then down and then to the left, and back up to the top. The opposite sense of rotation or revolution is counterclockwise or anticlockwise. In a mathematical sense, a circle defined parametrically in a positive Cartesian plane by the equations x = cos t and y = sin t is traced counterclockwise as t increases in value. Before clocks were commonplace, the terms sunwise and deasil (also deiseil and even deocil, from the Scottish Gaelic language) were used for clockwise, while widdershins or withershins was used for counterclockwise. The terms clockwise and counterclockwise can only be applied to a rotational motion once a side of the rotational plane is specified; for example, the rotation of the Earth is clockwise when viewed from above the South Pole. Clocks traditionally follow this sense of rotation because of the clock's predecessor, the sundial. Clocks with hands were first built in the Northern Hemisphere, and they were made to work like sundials. In order for such a sundial to work, it must be placed looking northward. Then, when the Sun moves in the sky, the shadow cast on the side of the sundial moves with the same sense of rotation; this is why hours were drawn on sundials in that manner. Note, however, that on a vertical sundial the shadow moves in the opposite direction, and some clocks were constructed to mimic this; the best-known surviving example is the clock in the Münster Cathedral. Occasionally, clocks whose hands revolve counterclockwise are nowadays sold as a novelty. Historically, some Jewish clocks were built that way, for example in some synagogue towers in Europe, to accord with right-to-left reading in the Hebrew language. In 2014, under Bolivian president Evo Morales, the clock outside the Legislative Assembly in Plaza Murillo was altered to run counterclockwise. Typical nuts, screws, bolts, bottle caps, and jar lids are tightened clockwise and loosened counterclockwise, in accordance with the right-hand rule.
Almost all threaded objects obey this rule, except for a few left-handed exceptions described below; sometimes the opposite sense of threading is used for a special reason. A thread might need to be left-handed to prevent operational stresses from loosening it. For bicycle pedals, the one on the left must be reverse-threaded to prevent it from unscrewing during use. Similarly, the whorl of a spinning wheel uses a left-hand thread to keep it from loosening. A turnbuckle has right-handed threads on one end and left-handed threads on the other. In trigonometry, and in mathematics in general, plane angles are conventionally measured counterclockwise, starting with 0° or 0 radians pointing directly to the right and 90° pointing straight up. However, in navigation, compass headings increase clockwise around the face, starting with 0° at the top of the compass.
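The two angle conventions in the last paragraph differ in both direction and zero point. As a minimal sketch (the function names here are illustrative, not from any standard library), converting between them is a single reflection and shift, and the counterclockwise tracing of the parametric circle can be checked with a cross product:

```python
import math

def math_angle_to_bearing(angle_deg):
    """Counterclockwise-from-east math angle -> clockwise-from-north compass bearing."""
    return (90.0 - angle_deg) % 360.0

def bearing_to_math_angle(bearing_deg):
    """Inverse conversion; the formula happens to be its own inverse."""
    return (90.0 - bearing_deg) % 360.0

# The circle (cos t, sin t) is traced counterclockwise as t increases:
# the cross product of successive points is positive.
pts = [(math.cos(t), math.sin(t)) for t in (0.0, 0.1)]
cross = pts[0][0] * pts[1][1] - pts[0][1] * pts[1][0]
```

Note that the same map converts in both directions, since reflecting about the 45° line twice is the identity.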
44.
Unit vector
–
In mathematics, a unit vector in a normed vector space is a vector of length 1. A unit vector is often denoted by a lowercase letter with a circumflex, or "hat". The term direction vector is used to describe a unit vector being used to represent spatial direction; two 2D direction vectors, d1 and d2, are illustrated. 2D spatial directions represented this way are equivalent numerically to points on the unit circle. The same construct is used to specify spatial directions in 3D; as illustrated, each direction is equivalent numerically to a point on the unit sphere. The normalized vector or versor û of a vector u is the unit vector in the direction of u, i.e. û = u / ‖u‖, where ‖u‖ is the norm (length) of u. The term normalized vector is used as a synonym for unit vector. Unit vectors are often chosen to form the basis of a vector space; every vector in the space may then be written as a linear combination of unit vectors. By definition, in a Euclidean space the dot product of two unit vectors is a scalar value amounting to the cosine of the smaller subtended angle. In three-dimensional Euclidean space, the cross product of two arbitrary unit vectors is a third vector orthogonal to both of them having length equal to the sine of the smaller subtended angle. Unit vectors may be used to represent the axes of a Cartesian coordinate system, and they are often denoted using normal vector notation rather than standard unit vector notation. In most contexts it can be assumed that i, j, and k are the unit vectors of a 3D Cartesian coordinate system. Other notations, with or without hat, are also used, particularly in contexts where i, j, k might lead to confusion with another quantity. When a unit vector in space is expressed, with Cartesian notation, as a linear combination of i, j, k, the value of each component is equal to the cosine of the angle formed by the vector with the respective basis vector. This is one of the methods used to describe the orientation of a straight line, segment of straight line, or oriented axis.
It is important to note that ρ̂ and φ̂ are functions of φ; when differentiating or integrating in cylindrical coordinates, these unit vectors themselves must also be operated on. For a more complete description, see Jacobian matrix. To minimize degeneracy, the polar angle is usually taken as 0° ≤ θ ≤ 180°. It is especially important to note the context of any ordered triplet written in spherical coordinates; here, the American physics convention is used.
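The normalization û = u/‖u‖ above is straightforward to compute. A minimal sketch (the helper name `normalize` is illustrative):

```python
import math

def normalize(u):
    """Return the unit vector u / ||u|| in the direction of u."""
    norm = math.sqrt(sum(c * c for c in u))
    if norm == 0:
        raise ValueError("the zero vector has no direction")
    return [c / norm for c in u]

v = normalize([3.0, 4.0])   # -> [0.6, 0.8], a point on the unit circle
```

The zero vector must be excluded, since it defines no direction.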
45.
Sign (mathematics)
–
In mathematics, the concept of sign originates from the property of every non-zero real number of being either positive or negative. Zero itself is signless, although in some contexts it makes sense to consider a signed zero. Along with its application to real numbers, change of sign is used throughout mathematics and physics to denote the additive inverse, even for quantities which are not real numbers. The sign can also indicate aspects of mathematical objects that resemble positivity and negativity. A real number is said to be positive if its value is greater than zero and negative if it is less than zero; the attribute of being positive or negative is called the sign of the number. Zero itself is not considered to have a sign. Also, signs are not defined for complex numbers, although the argument generalizes the notion in some sense. In common numeral notation, the sign of a number is often denoted by placing a plus sign or a minus sign before the number. For example, +3 denotes positive three, and −3 denotes negative three; when no plus or minus sign is given, the default interpretation is that the number is positive. This notation also meshes with the definition of negative numbers through subtraction: in this context, it makes sense to write −(−3) = +3. Any non-zero number can be changed to a positive one using the absolute value function. For example, the absolute value of −3 and the absolute value of 3 are both equal to 3; in symbols, this would be written |−3| = 3 and |3| = 3. The number zero is neither positive nor negative, and therefore has no sign. In arithmetic, +0 and −0 both denote the same number 0, which is its own additive inverse. Note that this definition is culturally determined: in France and Belgium, 0 is said to be both positive and negative. The positive (resp. negative) numbers without zero are said to be strictly positive (resp. strictly negative). In some contexts, such as signed number representations in computing, it makes sense to consider signed versions of zero, with positive zero and negative zero being different numbers.
One also sees +0 and −0 in calculus and mathematical analysis when evaluating one-sided limits; this notation refers to the behaviour of a function as the input variable approaches 0 from positive or negative values respectively, and these behaviours are not necessarily the same. Because zero is neither positive nor negative, the following phrases are sometimes used to refer to the sign of an unknown number. A number is positive if it is greater than zero and negative if it is less than zero. A number is non-negative if it is greater than or equal to zero, and non-positive if it is less than or equal to zero. Thus a non-negative number is either positive or zero, while a non-positive number is either negative or zero.
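The trichotomy described above can be captured in a one-line sign function; a small sketch (the name `sign` is illustrative, as Python has no built-in sign):

```python
def sign(x):
    """Sign of a real number: +1 if positive, -1 if negative, 0 for (signless) zero."""
    return (x > 0) - (x < 0)

# Change of sign is the additive inverse, and |-x| = |x|:
assert sign(-3) == -1 and sign(+3) == 1 and sign(0) == 0
assert abs(-3) == abs(3) == 3
assert -(-3) == +3
```

The boolean subtraction trick works because Python treats True and False as 1 and 0 in arithmetic.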
46.
Hypersphere of rotations
–
In mathematics, the special orthogonal group in three dimensions, otherwise known as the rotation group SO(3), is a naturally occurring example of a manifold. The various charts on SO(3) set up rival coordinate systems; in this case there cannot be said to be a preferred set of parameters describing a rotation. There are three degrees of freedom, so that the dimension of SO(3) is three. In numerous applications one or another coordinate system is used, and the question arises how to convert from a given system to another. In geometry the rotation group is the group of all rotations about the origin of three-dimensional Euclidean space R3 under the operation of composition. By definition, a rotation about the origin is a linear transformation that preserves the length of vectors and preserves the orientation of space. A length-preserving transformation which reverses orientation is called an improper rotation; every improper rotation of three-dimensional Euclidean space is a rotation followed by a reflection in a plane through the origin. Composing two rotations results in another rotation, every rotation has a unique inverse rotation, and the identity map satisfies the definition of a rotation. Owing to these properties, the set of all rotations is a group under composition. Moreover, the group has a natural manifold structure for which the group operations are smooth. The rotation group is often denoted SO(3) for reasons explained below. The space of rotations is isomorphic with the set of rotation operators and the set of orthogonal matrices with determinant +1. The rotation-vector notation arises from Euler's rotation theorem, which states that any rotation in three dimensions can be described by a rotation through some angle about some axis. Considering this, we can specify the axis of one of these rotations by two angles, and we can use the radius of the vector to specify the angle of rotation.
These vectors represent a ball in 3D with an unusual topology. This 3D solid ball is equivalent to the surface of a 4D sphere, which is also a 3D manifold. To make this equivalence, we will have to define how to represent a rotation with this 4D-embedded surface; it is convenient to consider the space as the three-dimensional sphere S3, the boundary of a ball in 4-dimensional Euclidean space. The way in which the radius can be used to specify the angle of rotation is not straightforward: it can be related to circles of latitude in a sphere with a defined north pole, and is explained as follows. Beginning at the north pole of a sphere in three-dimensional space, we take the point at the north pole to represent the identity rotation; in the case of the identity rotation, no axis of rotation is defined. A rotation having a very small rotation angle can be specified by a slice through the sphere parallel to the xy-plane and very near the north pole. The circle defined by this slice will be very small, corresponding to the small angle of the rotation.
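Euler's rotation theorem above says an axis and an angle determine any rotation in SO(3). A minimal sketch of turning that axis-angle data into a rotation matrix, using Rodrigues' rotation formula (the function name is illustrative, and a unit axis is assumed):

```python
import math

def rotation_matrix(axis, angle):
    """Rotation matrix about a unit axis by the given angle (Rodrigues' formula)."""
    x, y, z = axis
    c, s, t = math.cos(angle), math.sin(angle), 1 - math.cos(angle)
    return [
        [t * x * x + c,     t * x * y - s * z, t * x * z + s * y],
        [t * x * y + s * z, t * y * y + c,     t * y * z - s * x],
        [t * x * z - s * y, t * y * z + s * x, t * z * z + c],
    ]

# A quarter turn about the z-axis maps (1, 0, 0) to (0, 1, 0).
R = rotation_matrix((0.0, 0.0, 1.0), math.pi / 2)
```

Such a matrix is orthogonal with determinant +1, matching the characterization of SO(3) above.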
47.
Diffeomorphism
–
In mathematics, a diffeomorphism is an isomorphism of smooth manifolds. It is a function that maps one differentiable manifold to another such that both the function and its inverse are smooth. Given two manifolds M and N, a map f : M → N is called a diffeomorphism if it is a bijection and its inverse f−1 is differentiable as well. If these functions are r times continuously differentiable, f is called a Cr-diffeomorphism. Two manifolds M and N are diffeomorphic if there is a diffeomorphism f from M to N; they are Cr diffeomorphic if there is an r times continuously differentiable bijective map between them whose inverse is also r times continuously differentiable. f is said to be a diffeomorphism if it is bijective, smooth, and its inverse is smooth. First remark: it is essential for V to be connected for the function f to be globally invertible. Second remark: since the differential at a point, Dfx : TxU → Tf(x)V, is a linear map, it has a well-defined inverse only if Dfx is a bijection. The matrix representation of Dfx is the n × n matrix of partial derivatives whose entry in the i-th row and j-th column is ∂fi/∂xj; this so-called Jacobian matrix is used for explicit computations. Third remark: diffeomorphisms are necessarily between manifolds of the same dimension. Imagine f going from dimension n to dimension k: if n < k then Dfx could never be surjective, and if n > k then Dfx could never be injective; in both cases, therefore, Dfx fails to be a bijection. Fourth remark: if Dfx is a bijection at x then f is said to be a local diffeomorphism. Fifth remark: given a smooth map from dimension n to dimension k, if Df is surjective, f is said to be a submersion. Sixth remark: a differentiable bijection is not necessarily a diffeomorphism; f(x) = x3, for example, is not a diffeomorphism from R to itself because its derivative vanishes at 0. This is an example of a homeomorphism that is not a diffeomorphism. Seventh remark: when f is a map between differentiable manifolds, a diffeomorphic f is a stronger condition than a homeomorphic f.
For a diffeomorphism, f and its inverse need to be differentiable; for a homeomorphism, f and its inverse need only be continuous. Every diffeomorphism is a homeomorphism, but not every homeomorphism is a diffeomorphism. f : M → N is called a diffeomorphism if, in coordinate charts, it satisfies the definition above. More precisely: pick any cover of M by compatible coordinate charts and do the same for N. Let φ and ψ be charts on, respectively, M and N, with U and V as, respectively, the images of φ and ψ. The map ψfφ−1 : U → V is then a diffeomorphism as in the definition above, whenever f(φ−1(U)) ⊂ ψ−1(V).
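The sixth remark's example can be checked numerically. A small sketch using central finite differences (the variable names are illustrative), showing that f(x) = x³ has vanishing derivative at 0 while the difference quotient of its inverse blows up there:

```python
import math

def f(x):
    return x ** 3

def f_inv(x):
    # real cube root, defined for negative inputs as well
    return math.copysign(abs(x) ** (1.0 / 3.0), x)

h = 1e-6
slope_f = (f(h) - f(-h)) / (2 * h)            # ~h**2: the derivative at 0 is 0
slope_inv = (f_inv(h) - f_inv(-h)) / (2 * h)  # ~h**(-2/3): unbounded as h -> 0
```

So f is a smooth bijection and a homeomorphism of R, but its inverse fails to be differentiable at the origin, which is exactly why f is not a diffeomorphism.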
48.
Quotient space (topology)
–
In topology and related areas of mathematics, a quotient space is, intuitively speaking, the result of identifying or gluing together certain points of a given topological space. The points to be identified are specified by an equivalence relation; this is commonly done in order to construct new spaces from given ones. The quotient topology consists of all sets with an open preimage under the projection map that maps each element to its equivalence class; equivalently, it is the final topology on the quotient space with respect to the map q. A map f : X → Y is a quotient map if it is surjective and a subset of Y is open exactly when its preimage is open; equivalently, f is a quotient map if it is onto and Y is equipped with the final topology with respect to f. Given an equivalence relation ∼ on X, the map q : X → X/∼ is a quotient map. Topologists talk of gluing points together: consider the unit square I2 = [0, 1] × [0, 1] and the equivalence relation ~ generated by the requirement that all boundary points be equivalent, thus identifying all boundary points to a single equivalence class. Then I2/~ is homeomorphic to the unit sphere S2. More generally, suppose X is a space and A is a subspace of X. One can identify all points in A to a single equivalence class and leave points outside of A equivalent only to themselves. The resulting quotient space is denoted X/A; the 2-sphere is then homeomorphic to the unit disc with its boundary identified to a single point, D2/∂D2. Consider the set X = R of all real numbers with the ordinary topology, and write x ~ y if and only if x − y is an integer. Then the quotient space X/~ is homeomorphic to the unit circle S1 via the homeomorphism which sends the equivalence class of x to exp(2πix). A generalization of this example is the following: suppose a topological group G acts continuously on a space X. One can form an equivalence relation on X by saying points are equivalent if and only if they lie in the same orbit. The quotient space under this relation is called the orbit space. In the previous example G = Z acts on R by translation, and the orbit space R/Z is homeomorphic to S1. Note: the notation R/Z is somewhat ambiguous.
If Z is understood to be a group acting on R then the quotient is the circle; however, if Z is thought of as a subspace of R, then the quotient is a countably infinite bouquet of circles joined at a single point. If a continuous map defined on X is constant on each equivalence class, it induces a unique continuous map on X/~, and we say that it descends to the quotient. The continuous maps defined on X/~ are therefore precisely those maps which arise from continuous maps defined on X that respect the equivalence relation. This criterion is used constantly when studying quotient spaces.
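The homeomorphism R/Z ≅ S1 above can be sanity-checked numerically. A sketch (the helper name is illustrative) confirming that reals differing by an integer land on the same point of the unit circle under x ↦ exp(2πix):

```python
import cmath

def to_circle(x):
    """Map a real number to the unit circle; x and x + n (integer n) land on the same point."""
    return cmath.exp(2j * cmath.pi * x)

p, q = to_circle(0.25), to_circle(3.25)   # representatives of the same class in R/Z
```

Because the map is constant on equivalence classes, it descends to a well-defined map on the quotient, exactly as described above.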
49.
Antipodal point
–
This term applies to opposite points on a circle or any n-sphere. An antipodal point is sometimes called an antipode, a back-formation from the Greek loan word antipodes. On a circle, such points are also called diametrically opposite. In other words, each line through the centre intersects the sphere in two points, one for each ray out from the centre, and these two points are antipodal. The Borsuk–Ulam theorem is a result from algebraic topology dealing with such pairs of points. It says that any continuous function from Sn to Rn maps some pair of antipodal points in Sn to the same point in Rn. Here, Sn denotes the n-dimensional sphere in (n + 1)-dimensional space. The antipodal map A : Sn → Sn, defined by A(x) = −x, sends every point on the sphere to its antipodal point. It is homotopic to the identity map if n is odd. If one wants to consider antipodal points as identified, one passes to projective space.
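The antipodal map A(x) = −x is simple enough to state in code; a minimal sketch (the function name is illustrative) treating points of Sⁿ as coordinate tuples in Rⁿ⁺¹:

```python
def antipode(x):
    """Antipodal map A(x) = -x on the sphere S^n (points given as tuples in R^(n+1))."""
    return tuple(-c for c in x)

p = (0.6, 0.8, 0.0)                 # a point on S^2 embedded in R^3
q = antipode(p)                     # the diametrically opposite point
dot = sum(a * b for a, b in zip(p, q))   # -1 for unit vectors: opposite rays
```

Applying the map twice returns the original point, reflecting that A is an involution.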
50.
Topological space
–
Other spaces, such as manifolds and metric spaces, are specializations of topological spaces with extra structures or constraints. Being so general, topological spaces are a central unifying notion, and the branch of mathematics that studies topological spaces in their own right is called point-set topology or general topology. The utility of the notion of a topology is shown by the fact that there are several equivalent definitions of this structure; thus one chooses the axiomatisation suited for the application. The most commonly used, and the most elegant, is that in terms of open sets, but the most intuitive is that in terms of neighbourhoods, and so this is given first. (A variety of other axiomatisations of topological spaces are listed in the exercises of the book by Vaidyanathaswamy.) This axiomatization is due to Felix Hausdorff. Let X be a set; the elements of X are usually called points. Let N be a function assigning to each x in X a non-empty collection N(x) of subsets of X. The elements of N(x) will be called neighbourhoods of x with respect to N. The function N is called a neighbourhood topology if the axioms below are satisfied, and then X with N is called a topological space. If N is a neighbourhood of x, then x ∈ N; in other words, each point belongs to every one of its neighbourhoods. If N is a subset of X and includes a neighbourhood of x, then N is a neighbourhood of x; i.e., every superset of a neighbourhood of a point x in X is again a neighbourhood of x. The intersection of two neighbourhoods of x is a neighbourhood of x. Any neighbourhood N of x includes a neighbourhood M of x such that N is a neighbourhood of each point of M. The first three axioms for neighbourhoods have a clear meaning; the fourth axiom has a very important use in the structure of the theory, that of linking together the neighbourhoods of different points of X.
A standard example of such a system of neighbourhoods is for the real line R. Given such a structure, we can define a subset U of X to be open if U is a neighbourhood of all points in U. A topological space is a pair (X, τ), where X is a set and τ is a collection of subsets of X satisfying the following axioms: the empty set and X itself belong to τ; any union of members of τ still belongs to τ; and the intersection of any finite number of members of τ still belongs to τ. The elements of τ are called open sets and the collection τ is called a topology on X. For any set X, the collection τ = {∅, X} of only the two subsets of X required by the axioms forms a topology of X, the trivial topology. For a suitable set X, a collection τ of six subsets of X can form another topology of X. Given a set X and the collection τ = P(X) of all subsets of X, (X, τ) is a topological space; τ is called the discrete topology. Using de Morgan's laws, the above axioms defining open sets become axioms defining closed sets: the empty set and X are closed; the intersection of any collection of closed sets is also closed; and the union of any finite number of closed sets is also closed.
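For a finite set, the open-set axioms above can be checked exhaustively. A small sketch (the function name is illustrative; for finite collections, checking pairwise unions and intersections suffices):

```python
def is_topology(X, tau):
    """Check the open-set axioms: contains {} and X, closed under union and intersection."""
    sets = [frozenset(s) for s in tau]
    if frozenset() not in sets or frozenset(X) not in sets:
        return False
    for a in sets:
        for b in sets:
            if a | b not in sets or a & b not in sets:
                return False
    return True

X = {1, 2, 3}
assert is_topology(X, [set(), {1}, {1, 2}, X])   # a nested chain of opens works
assert not is_topology(X, [set(), {1}, {2}, X])  # {1} U {2} = {1, 2} is missing
```

For infinite collections the union axiom must hold for arbitrary unions, not just pairwise ones, so this finite check is only a model of the definition.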
51.
Homeomorphic
–
In the mathematical field of topology, a homeomorphism or topological isomorphism or bicontinuous function is a continuous function between topological spaces that has a continuous inverse function. Homeomorphisms are the isomorphisms in the category of topological spaces; that is, two spaces with a homeomorphism between them are called homeomorphic, and from a topological viewpoint they are the same. The word homeomorphism comes from the Greek words ὅμοιος = similar and μορφή = shape. Roughly speaking, a topological space is a geometric object, and a homeomorphism is a continuous stretching and bending of the object into a new shape. Thus, a square and a circle are homeomorphic to each other, but a sphere and a torus are not. A function f : X → Y between two topological spaces is called a homeomorphism if it has the following properties: f is a bijection, f is continuous, and the inverse function f−1 is continuous. A function with these three properties is sometimes called bicontinuous. If such a function exists, we say X and Y are homeomorphic. A self-homeomorphism is a homeomorphism of a topological space with itself. The homeomorphisms form an equivalence relation on the class of all topological spaces; the resulting equivalence classes are called homeomorphism classes. The open interval (a, b) is homeomorphic to the real numbers R for any a < b. The unit 2-disc D2 and the unit square in R2 are homeomorphic; an example of such a mapping from the square to the disc can be written in polar coordinates. The graph of a continuous function is homeomorphic to the domain of the function. A differentiable parametrization of a curve is a homeomorphism between the domain of the parametrization and the curve. A chart of a manifold is a homeomorphism between an open subset of the manifold and an open subset of a Euclidean space. The stereographic projection is a homeomorphism between the sphere in R3 with a single point removed and the set of all points in R2. If G is a topological group, its inversion map x ↦ x−1 is a homeomorphism.
Also, for any x ∈ G, the left translation y ↦ xy and the right translation y ↦ yx are homeomorphisms. Rm and Rn are not homeomorphic for m ≠ n. The Euclidean real line is not homeomorphic to the circle as a subspace of R2, since the unit circle is compact as a subspace of Euclidean R2 but the real line is not compact. The third requirement, that f−1 be continuous, is essential; consider for instance the function f : [0, 2π) → S1 defined by f(φ) = (cos φ, sin φ), a continuous bijection whose inverse is not continuous.
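The failure of continuity of the inverse in that last example can be seen numerically. A sketch (names are illustrative): two points just above and below (1, 0) on the circle are very close together, yet their preimages sit at opposite ends of [0, 2π):

```python
import math

def f(t):
    """Continuous bijection from [0, 2*pi) onto the unit circle."""
    return (math.cos(t), math.sin(t))

a, b = f(0.001), f(2 * math.pi - 0.001)
gap = math.dist(a, b)                          # tiny distance on the circle
preimage_gap = (2 * math.pi - 0.001) - 0.001   # nearly a full turn apart in [0, 2*pi)
```

So f fails the third requirement: nearby outputs do not come from nearby inputs near the seam at (1, 0), and f is a continuous bijection but not a homeomorphism.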
52.
Smooth manifold
–
In mathematics, a differentiable manifold is a type of manifold that is locally similar enough to a linear space to allow one to do calculus. Any manifold can be described by a collection of charts, also known as an atlas; one may then apply ideas from calculus while working within the individual charts, since each chart lies within a linear space to which the usual rules of calculus apply. If the charts are suitably compatible, then computations done in one chart are valid in any other differentiable chart. In formal terms, a differentiable manifold is a topological manifold with a globally defined differential structure. Any topological manifold can be given a differential structure locally by using the homeomorphisms in its atlas and the standard differential structure on a linear space. In other words, where the domains of charts overlap, the coordinates defined by each chart are required to be differentiable with respect to the coordinates defined by every other chart in the atlas. The maps that relate the coordinates defined by the various charts to one another are called transition maps. Differentiability means different things in different contexts, including continuously differentiable, k times differentiable, and smooth. Furthermore, the ability to induce such a differential structure on an abstract space allows one to extend the definition of differentiability to spaces without global coordinate systems. A differential structure allows one to define the globally differentiable tangent space and differentiable functions. Differentiable manifolds are very important in physics: special kinds of differentiable manifolds form the basis for theories such as classical mechanics and general relativity. It is possible to develop a calculus for differentiable manifolds, and this leads to such mathematical machinery as the exterior calculus.
The study of calculus on differentiable manifolds is known as differential geometry. The emergence of differential geometry as a distinct discipline is generally credited to Carl Friedrich Gauss and Bernhard Riemann. Riemann first described manifolds in his famous habilitation lecture before the faculty at Göttingen, and these ideas found a key application in Einstein's theory of general relativity and its underlying equivalence principle. A modern definition of a 2-dimensional manifold was given by Hermann Weyl in his 1913 book on Riemann surfaces; the widely accepted general definition of a manifold in terms of an atlas is due to Hassler Whitney. A presentation of a manifold is a second countable Hausdorff space that is locally homeomorphic to a linear space. This formalizes the notion of patching together pieces of a space to make a manifold; the manifold produced also contains the data of how it has been patched together. However, different atlases may produce the same manifold, and a manifold does not come with a preferred atlas. Thus, one defines a manifold to be a space as above with an equivalence class of atlases. There are a number of different types of manifolds, depending on the precise differentiability requirements on the transition functions. Some common examples include the following: a differentiable manifold is a topological manifold equipped with an equivalence class of atlases whose transition maps are all differentiable.
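Transition maps can be made concrete even on the circle S1. A sketch (chart names are illustrative) using the two standard stereographic charts, projecting from the north pole (0, 1) and the south pole (0, −1); on their overlap the transition map works out to v = 1/u, which is smooth wherever u ≠ 0:

```python
import math

def chart_north(p):
    """Stereographic coordinate from the north pole (0, 1); undefined at the pole itself."""
    x, y = p
    return x / (1 - y)

def chart_south(p):
    """Stereographic coordinate from the south pole (0, -1)."""
    x, y = p
    return x / (1 + y)

t = 2.0                              # a point on S1 away from both poles
p = (math.cos(t), math.sin(t))
u, v = chart_north(p), chart_south(p)
# transition map on the overlap: v = 1/u  (since u*v = x**2 / (1 - y**2) = 1 on S1)
```

Because 1/u is smooth away from u = 0, these two charts form a differentiable atlas for the circle in the sense described above.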
53.
Connectedness
–
In mathematics, connectedness is used to refer to various properties meaning, in some sense, "all one piece". When a mathematical object has such a property, we say it is connected; when a disconnected object can be split naturally into connected pieces, each piece is usually called a component. A topological space is said to be connected if it is not the union of two disjoint nonempty open sets. Fields of mathematics are typically concerned with special kinds of objects. Often such an object is said to be connected if, when it is considered as a topological space, it is a connected space; thus, manifolds, Lie groups, and graphs are all called connected if they are connected as topological spaces, and their components are the topological components. Sometimes it is convenient to restate the definition of connectedness in such fields. For example, a graph is said to be connected if each pair of vertices in the graph is joined by a path; this definition is equivalent to the topological one, as applied to graphs. Graph theory also offers a measure of connectedness, called the clustering coefficient. Other fields of mathematics are concerned with objects that are rarely considered as topological spaces. Nonetheless, definitions of connectedness often reflect the topological meaning in some way. For example, in category theory, a category is said to be connected if each pair of objects in it is joined by a sequence of morphisms; thus, a category is connected if it is, intuitively, all one piece. There may be different notions of connectedness that are intuitively similar, but different as formally defined concepts. We might wish to call a topological space connected if each pair of points in it is joined by a path; however this concept turns out to be different from standard topological connectedness. In particular, there are connected topological spaces for which this property does not hold.
Because of this, different terminology is used: spaces with this property are said to be path connected. While not all connected spaces are path connected, all path connected spaces are connected. Terms involving connected are also used for properties that are related to, but clearly different from, connectedness. Thus, a sphere and a disk are each simply connected, while a torus is not. As another example, a directed graph is strongly connected if each ordered pair of vertices is joined by a directed path. Other concepts express the way in which an object is not connected; for example, a topological space is totally disconnected if each of its components is a single point. Properties and parameters based on the idea of connectedness often involve the word connectivity; for example, in graph theory, a connected graph is one from which we must remove at least one vertex to create a disconnected graph. In recognition of this, such graphs are said to be 1-connected.
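The graph-theoretic definition above (every pair of vertices joined by a path) can be tested with a breadth-first search; a minimal sketch over an adjacency-list representation (the function name is illustrative):

```python
from collections import deque

def is_connected(graph):
    """A graph is connected iff every vertex is reachable from any single start vertex."""
    if not graph:
        return True
    start = next(iter(graph))
    seen = {start}
    queue = deque([start])
    while queue:
        for nbr in graph[queue.popleft()]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return len(seen) == len(graph)

path = {0: [1], 1: [0, 2], 2: [1]}          # one piece
split = {0: [1], 1: [0], 2: [3], 3: [2]}    # two components
```

The set `seen` at the end of the search is exactly the component of the start vertex, matching the notion of component described above.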
54.
Simply connected
–
If a space is not simply connected, it is convenient to measure the extent to which it fails to be simply connected; this is done by the fundamental group. Intuitively, the fundamental group measures how the holes behave on a space; if there are no holes, the group is trivial, or equivalently, the space is simply connected. Informally, an object in our space is simply connected if it consists of one piece and has no holes passing through it. For example, neither a doughnut nor a coffee cup with a handle is simply connected. In two dimensions, a circle is not simply connected, but a disk and a line are. Spaces that are connected but not simply connected are called non–simply connected or, in a somewhat old-fashioned term, multiply connected. To illustrate the notion of simple connectedness, suppose we are considering an object in three dimensions, for example, an object in the shape of a box or a doughnut. Think of the object as a strangely shaped aquarium full of water, with rigid sides. Now think of a diver who takes a piece of string and trails it through the water inside the aquarium, in whatever way he pleases, then joins the two ends into a loop. Now the loop begins to contract on itself, getting smaller and smaller. If the loop can always shrink all the way to a point, then the aquarium's interior is simply connected. If sometimes the loop gets caught — for example, around the hole in the doughnut — then the object is not simply connected. Notice that the definition only rules out handle-shaped holes. A sphere is simply connected, because any loop on the surface of a sphere can contract to a point. The stronger condition, that the object has no holes of any dimension, is called contractibility. Intuitively, simple connectedness means that any path p between two points can be continuously deformed into any other such path q while keeping the endpoints fixed; hence the term simply connected: for any two points in X, there is one and essentially only one path connecting them. A third way to express the same: X is simply connected if and only if X is path-connected and the fundamental group of X at each of its points is trivial, i.e. consists only of the identity element.
Yet another formulation is used in complex analysis: an open subset X of C is simply connected if and only if both X and its complement in the Riemann sphere are connected. It might also be worth pointing out that a relaxation of the requirement that X be connected leads to an exploration of open subsets of the plane with connected extended complement. For example, a set has connected extended complement exactly when each of its connected components is simply connected.
55.
Turn (geometry)
–
A turn is a unit of plane angle measurement equal to 2π radians, 360°, or 400 gon. A turn is also referred to as a revolution, complete rotation, full circle, cycle, rev, or rot. A turn can be subdivided in many different ways: into half turns, quarter turns, centiturns, milliturns, binary angles, points, etc. A turn can be divided in 100 centiturns or 1000 milliturns, with each milliturn corresponding to an angle of 0.36°. A protractor divided in centiturns is normally called a percentage protractor. Binary fractions of a turn are also used. Sailors have traditionally divided a turn into 32 compass points, and the binary degree, also known as the binary radian, is 1⁄256 turn. The binary degree is used in computing so that an angle can be represented to the maximum possible precision in a single byte; other measures of angle used in computing may be based on dividing one whole turn into 2n equal parts for other values of n. The notion of turn is used for planar rotations. Two special rotations have acquired appellations of their own: a rotation through 90° is referred to as a quarter-turn, and a rotation through 180° is commonly referred to as a half-turn. The word turn originates via Latin and French from the Greek word τόρνος. In 1697, David Gregory used π/ρ to denote the perimeter of a circle divided by its radius. However, earlier in 1647, William Oughtred had used δ/π for the ratio of the diameter to perimeter. The first use of the symbol π on its own with its present meaning was in 1706 by the Welsh mathematician William Jones; Euler adopted the symbol with that meaning in 1737, leading to its widespread use. Percentage protractors have existed since 1922, but the terms centiturn and milliturn were introduced much later by Sir Fred Hoyle. The German standard DIN 1315 proposed the unit symbol pla for turns. Since 2011, the HP 39gII and HP Prime support the unit symbol tr for turns, and in 2016, support for turns was also added to newRPL for the HP 50g. One turn is equal to 2π radians.
In 1958, Albert Eagle proposed the Greek letter tau (τ) as a symbol for π/2, and his proposal used a pi-with-three-legs symbol to denote the constant. In 2010, Michael Hartl proposed to use tau to represent Palais' circle constant, τ = 2π. First, τ is the number of radians in one turn, which allows fractions of a turn to be expressed directly: for instance, a quarter turn is τ/4 radians. Second, τ visually resembles π, whose association with the circle constant is unavoidable. Hartl's Tau Manifesto gives many examples of formulas that are simpler if tau is used instead of pi; however, a rebuttal was given in The Pi Manifesto, stating a variety of reasons tau should not supplant pi
56.
Cyclic group
–
In algebra, a cyclic group or monogenous group is a group that is generated by a single element. Each element can be written as a power of g in multiplicative notation (or as a multiple of g in additive notation), and this element g is called a generator of the group. Every infinite cyclic group is isomorphic to the additive group of Z, the integers. Every finite cyclic group of order n is isomorphic to the additive group of Z/nZ, the integers modulo n. Every cyclic group is an abelian group, and every finitely generated abelian group is a direct product of cyclic groups. A group G is called cyclic if there exists an element g in G such that G = ⟨g⟩ = {g^k : k ∈ Z}. Since any group generated by an element in a group is a subgroup of that group, showing that the only subgroup of G that contains g is G itself suffices to show that G is cyclic. For example, if G = {g^0, g^1, g^2, g^3, g^4, g^5} is a group of order 6, then g^6 = g^0, and G is cyclic. In fact, G is essentially the same as the set {0, 1, 2, 3, 4, 5} with addition modulo 6: for example, 1 + 2 ≡ 3 (mod 6) corresponds to g^1 · g^2 = g^3, and 2 + 5 ≡ 1 (mod 6) corresponds to g^2 · g^5 = g^7 = g^1, and so on. One can use the isomorphism χ defined by χ(g^i) = i. The name cyclic may be misleading: it is possible to generate infinitely many elements and not form any literal cycles, that is, every g^n is distinct. A group generated in this way is called an infinite cyclic group. The French group of mathematicians writing under the name Nicolas Bourbaki referred to a cyclic group as a monogenous group. The set of integers, with the operation of addition, forms a group; it is an infinite cyclic group, because all integers can be written as a finite sum or difference of copies of the number 1. In this group, 1 and −1 are the only generators; every infinite cyclic group is isomorphic to this group. For every positive integer n, the set of integers modulo n, again with the operation of addition, forms a finite cyclic group. An element g is a generator of this group if g is relatively prime to n; thus the number of different generators is φ(n), where φ is the Euler totient function, the function that counts the numbers modulo n that are relatively prime to n. 
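The counting fact above — the generators of Z/nZ are exactly the residues coprime to n, so there are φ(n) of them — can be checked directly. A small Python sketch (function names are our own):

```python
from math import gcd

def generators(n):
    """Generators of the additive cyclic group Z/nZ: those g with gcd(g, n) = 1."""
    return [g for g in range(1, n) if gcd(g, n) == 1]

def phi(n):
    """Euler's totient function, computed by direct count."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

print(generators(6))                 # [1, 5]
print(len(generators(12)), phi(12))  # 4 4
```

For n = 6 the only generators are 1 and 5, matching φ(6) = 2; repeatedly adding 2 or 3, by contrast, only reaches a proper subgroup.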
Every finite cyclic group is isomorphic to a group Z/n, where n is the order of the group. The integer and modular addition operations, used to define the cyclic groups, are the addition operations of commutative rings, also denoted Z and Z/n. If p is a prime, then Z/p is a finite field, and every field with p elements is isomorphic to this one. For every positive integer n, the subset of the integers modulo n that are relatively prime to n, with the operation of multiplication, forms a group, the multiplicative group of integers modulo n
57.
Physics
–
Physics is the natural science that involves the study of matter and its motion and behavior through space and time, along with related concepts such as energy and force. One of the most fundamental scientific disciplines, its main goal is to understand how the universe behaves. Physics is one of the oldest academic disciplines, perhaps the oldest through its inclusion of astronomy. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the fundamental mechanisms of other sciences while opening new avenues of research in areas such as mathematics and philosophy. Physics also makes significant contributions through advances in new technologies that arise from theoretical breakthroughs. The United Nations named 2005 the World Year of Physics. Astronomy is the oldest of the natural sciences; the stars and planets were often a target of worship, believed to represent their gods. While the explanations for these phenomena were often unscientific and lacking in evidence, these early observations laid the foundations for later astronomy. According to Asger Aaboe, the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy. The most notable innovations were in the field of optics and vision, which came from the works of many scientists like Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi and Avicenna. The most notable work was The Book of Optics, written by Ibn al-Haytham, in which he was not only the first to disprove the ancient Greek idea about vision, but also came up with a new theory. In the book, he was also the first to study the phenomenon of the pinhole camera. Many later European scholars and fellow polymaths, from Robert Grosseteste and Leonardo da Vinci to René Descartes, Johannes Kepler and Isaac Newton, were in his debt. 
Indeed, the influence of Ibn al-Haytham's Optics ranks alongside that of Newton's work of the same title. The translation of The Book of Optics had a huge impact on Europe; from it, later European scholars were able to build the same devices as Ibn al-Haytham had, and understand the way light works. From this, such important things as eyeglasses, magnifying glasses, telescopes and cameras were developed. Physics became a separate science when early modern Europeans used experimental and quantitative methods to discover what are now considered to be the laws of physics. Newton also developed calculus, the mathematical study of change, which provided new mathematical methods for solving physical problems. The discovery of new laws in thermodynamics, chemistry, and electromagnetics resulted from greater research efforts during the Industrial Revolution as energy needs increased. However, inaccuracies in classical mechanics for very small objects and very high velocities led to the development of modern physics in the 20th century. Modern physics began in the early 20th century with the work of Max Planck in quantum theory and Albert Einstein's theory of relativity; both of these theories came about due to inaccuracies in classical mechanics in certain situations. Quantum mechanics would come to be pioneered by Werner Heisenberg, Erwin Schrödinger and Paul Dirac. From this early work, and work in related fields, the Standard Model of particle physics was derived. Areas of mathematics in general are important to this field, such as the study of probabilities. In many ways, physics stems from ancient Greek philosophy
58.
Spinor
–
In geometry and physics, spinors are elements of a vector space that can be associated with Euclidean space. Like geometric vectors and more general tensors, spinors transform linearly when the Euclidean space is subjected to a slight (infinitesimal) rotation. It is also possible to associate a substantially similar notion of spinor to Minkowski space, in which case the Lorentz transformations of special relativity play the role of rotations. Spinors were introduced in geometry by Élie Cartan in 1913; in the 1920s physicists discovered that spinors are essential to describe the intrinsic angular momentum, or spin, of the electron and other subatomic particles. Spinors are characterized by the way in which they behave under rotations: they change in different ways depending not just on the overall final rotation, but on the details of how that rotation was achieved, that is, on the continuous path of rotations leading to it. There are two topologically distinguishable classes of paths through rotations that result in the same overall rotation, as famously illustrated by the belt trick puzzle. These two inequivalent classes yield spinor transformations of opposite sign. The spin group is the group of all rotations keeping track of the class; it doubly covers the rotation group, since each rotation can be obtained in two inequivalent ways as the endpoint of a path. The Clifford algebra is an associative algebra that can be constructed from Euclidean space and its inner product. Both the spin group and its Lie algebra are embedded inside the Clifford algebra in a natural way. After choosing an orthonormal basis of Euclidean space, a representation of the Clifford algebra is generated by gamma matrices, matrices that satisfy a set of canonical anti-commutation relations. The spinors are the column vectors on which these matrices act. In three Euclidean dimensions, for instance, the Pauli spin matrices are such a set of gamma matrices. However, the matrix representation of the Clifford algebra, and hence what precisely constitutes a column vector (spinor), involves the choice of basis. What characterizes spinors and distinguishes them from vectors and other tensors is subtle. 
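For the three-dimensional case just mentioned, the canonical anti-commutation relations satisfied by the Pauli matrices, {σᵢ, σⱼ} = 2δᵢⱼ I, can be verified numerically. A dependency-free Python sketch (the 2×2 matrix helpers are our own):

```python
# Minimal 2x2 complex matrix helpers, to keep the sketch dependency-free.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def madd(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

# Pauli spin matrices: a set of gamma matrices for 3-dimensional Euclidean space.
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]
pauli = [sx, sy, sz]

# Canonical anti-commutation relations: s_i s_j + s_j s_i = 2 * delta_ij * I.
for i in range(3):
    for j in range(3):
        anti = madd(matmul(pauli[i], pauli[j]), matmul(pauli[j], pauli[i]))
        expected = [[2 if (i == j and r == c) else 0 for c in range(2)] for r in range(2)]
        assert all(abs(anti[r][c] - expected[r][c]) < 1e-12
                   for r in range(2) for c in range(2))
print("anti-commutation relations verified")
```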
Consider applying a rotation to the coordinates of a system, no object in the system itself has moved, only the coordinates have, so there will always be a compensating change in those coordinate values when applied to any object of the system. Geometrical vectors, for example, have components that undergo the same rotation as the coordinates. More broadly, any tensor associated with the system also has coordinate descriptions that adjust to compensate for changes to the system itself. Spinors do not appear at this level of the description of a physical system, rather, spinors appear when we imagine that instead of a single rotation, the coordinate system is gradually rotated between some initial and final configuration
59.
Spin-statistics theorem
–
In quantum mechanics, the spin–statistics theorem relates the intrinsic spin of a particle to the particle statistics it obeys. In units of the reduced Planck constant ħ, all particles have either integer spin or half-integer spin. In a quantum system, a physical state is described by a state vector, and two state vectors that differ only by an overall phase factor are physically equivalent. A pair of indistinguishable particles has only one state: if the positions of the particles are exchanged, this does not identify a new physical state, but rather one matching the original physical state; in fact, one cannot tell which particle is in which position. While the physical state does not change under the exchange of the particles' positions, it is possible for the state vector to be negated as a result of an exchange; since this does not change the absolute value of the state vector, the negated vector represents the same physical state. The essential ingredient in proving the spin/statistics relation is relativity, that the physical laws do not change under Lorentz transformations. The field operators transform under Lorentz transformations according to the spin of the particle that they create. Additionally, the assumption that spacelike separated fields either commute or anticommute can be made only for relativistic theories with a time direction; otherwise, the notion of being spacelike is meaningless. However, the proof involves looking at a Euclidean version of spacetime, in which the time direction is treated as a spatial one, as will now be explained. Lorentz transformations include 3-dimensional rotations as well as boosts; a boost transfers to a frame of reference with a different velocity, and is mathematically like a rotation into time. By analytic continuation of the correlation functions of a quantum field theory, the time coordinate may become imaginary. 
The new spacetime has only spatial directions and is termed Euclidean. Bosons are particles whose wavefunction is symmetric under such an exchange, so if we swap the particles the wavefunction does not change. Fermions are particles whose wavefunction is antisymmetric, so it changes sign under such a swap; as a consequence, two identical fermions cannot occupy the same state — this is the Pauli exclusion principle, and this rule does not hold for bosons. In quantum field theory, a state or a wavefunction is described by field operators operating on some basic state called the vacuum. In order for the operators to project out the symmetric or antisymmetric component of the creating wavefunction, they must have the appropriate commutation law. Let us assume that x ≠ y and the two field operations take place at the same time; more generally, they may have spacelike separation. If the fields commute, meaning that ϕ(x) ϕ(y) = ϕ(y) ϕ(x), then only the symmetric part of ψ contributes, so that ψ(x, y) = ψ(y, x), and the field will create bosonic particles. Naively, neither commutation law has anything to do with the spin, which determines the rotation properties of the particles
60.
Universal cover
–
A covering map is a continuous surjective map p : C → X such that every point x in X has an open neighbourhood U whose preimage p⁻¹(U) is a union of disjoint open sets in C, each of which is mapped homeomorphically onto U by p. In this case, C is called a covering space and X the base space of the covering projection. The definition implies that every covering map is a local homeomorphism. Covering spaces play an important role in homotopy theory, harmonic analysis, Riemannian geometry and differential topology; in Riemannian geometry, for example, ramification is a generalization of the notion of covering maps. Covering spaces are also deeply intertwined with the study of homotopy groups and, in particular, the fundamental group. Let X be a topological space. The map p is called the covering map, the space X is often called the base space of the covering, and the space C is called the total space of the covering. For any point x in the base, the inverse image of x in C is necessarily a discrete space, called the fiber over x. The special open neighborhoods U of x given in the definition are called evenly covered neighborhoods, and they form an open cover of the space X. The homeomorphic copies in C of an evenly covered neighborhood U are called the sheets over U. In particular, covering maps are locally trivial. Many authors impose some connectivity conditions on the spaces X and C in the definition of a covering map; in particular, many authors require both spaces to be path-connected and locally path-connected. This can prove helpful because many theorems hold only if the spaces in question have these properties. Some authors omit the assumption of surjectivity, for if X is connected and C is nonempty then surjectivity of the covering map follows from the other axioms. A connected and locally path-connected topological space X has a universal cover if and only if it is semi-locally simply connected. ℝ is the universal cover of the unit circle S1, via the map t ↦ (cos 2πt, sin 2πt). The spin group Spin(n) is a double cover of the special orthogonal group SO(n), and a universal cover for n > 2. The accidental, or exceptional, isomorphisms for Lie groups then give isomorphisms between spin groups in low dimension and classical Lie groups. The unitary group U(n) has universal cover SU(n) × ℝ. 
The n-sphere Sn is a double cover of real projective space RPn and is a universal cover for n > 1. Every manifold has an orientable double cover that is connected if and only if the manifold is non-orientable. The uniformization theorem asserts that every Riemann surface has a universal cover conformally equivalent to the Riemann sphere, the complex plane, or the unit disc. The universal cover of a wedge of n circles is the Cayley graph of the free group on n generators, i.e. a Bethe lattice. The torus is a double cover of the Klein bottle. Every graph has a bipartite double cover. Since every graph is homotopy equivalent to a wedge of circles, its universal cover is a Cayley graph
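The circle example above can be made concrete: under the covering map t ↦ (cos 2πt, sin 2πt), the fiber over any point of S1 is the discrete set of integer translates of a single preimage. A Python sketch (names are our own):

```python
import math

def p(t):
    """Covering map p: R -> S^1, t |-> (cos 2*pi*t, sin 2*pi*t)."""
    return (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))

# The fiber over p(t0) is the discrete set {t0 + n : n an integer}.
t0 = 0.3
fiber_sample = [t0 + n for n in range(-2, 3)]
images = [p(t) for t in fiber_sample]

# All sampled preimages map to the same point on the circle.
assert all(math.isclose(x, images[0][0], abs_tol=1e-9) and
           math.isclose(y, images[0][1], abs_tol=1e-9) for x, y in images)
print("preimages", fiber_sample, "all map to the same point of S^1")
```

Restricting p to any open interval of length less than 1 is a homeomorphism onto its image, which is exactly the local-homeomorphism property the definition demands.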
61.
Spinor group
–
In mathematics the spin group Spin(n) is the double cover of the special orthogonal group SO(n) = SO(n, R), such that there exists a short exact sequence of Lie groups 1 → Z2 → Spin(n) → SO(n) → 1. As a Lie group, Spin(n) therefore shares its dimension, n(n − 1)/2, and its Lie algebra with the special orthogonal group. For n > 2, Spin(n) is simply connected and so coincides with the universal cover of SO(n). The non-trivial element of the kernel is denoted −1, which should not be confused with the orthogonal transform of reflection through the origin. Spin(n) can be constructed as a subgroup of the invertible elements in the Clifford algebra Cl(n). A distinct article discusses the spin representations. Construction of the Spin group often starts with the construction of the Clifford algebra over a real vector space V. The Clifford algebra is the quotient of the tensor algebra TV of V by a two-sided ideal. The resulting space is naturally graded and can be written as Cl = Cl0 ⊕ Cl1 ⊕ Cl2 ⊕ ⋯ where Cl0 = R and Cl1 = V. The spin algebra spin(V) is defined as Cl2, written spin(n) as a short-hand when V is a real vector space of real dimension n. It is a Lie algebra, and it has a natural action on V. Note that many authors drop the use of the tensor symbol ⊗, making it implicit; here, however, it is shown explicitly, to keep the construction clear. The spin group is defined as Spin(n) = Pin(n) ∩ Cleven, where Cleven = Cl0 ⊕ Cl2 ⊕ Cl4 ⊕ ⋯ is the subspace spanned by an even number of products of vectors. That is, Spin(n) consists of all elements of Pin(n), given above, restricted to the even subspace. The restriction to the even subspace is key to the formation of two-component (Weyl) spinors, constructed below. This anti-commutation turns out to be of importance in physics, and is also a key ingredient for the formulation of supersymmetry. The Clifford algebra and the spin group have many interesting and curious properties, some of which are listed below. A double covering of SO(n) by Spin(n) can be given explicitly: let {e1, …, en} be an orthonormal basis for V. 
The above gives a double covering of both O(n) by Pin(n) and of SO(n) by Spin(n). With a small amount of work, it can be seen that ρ corresponds to reflection across a hyperplane; this follows from the anti-commuting property of the Clifford algebra. It is worth reviewing how spinor space and Weyl spinors are constructed. Given a real vector space V of dimension n = 2m, an even number, its complexification is V ⊗ C. It is straightforward to see that the spinors anti-commute, and that the product of a spinor and anti-spinor is a scalar. The spinor space is defined as the exterior algebra ⋀W
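The 2-to-1 nature of the covering can be illustrated with unit quaternions, which realize Spin(3): a quaternion q and its negative −q induce the same rotation of 3-space. A Python sketch using the standard quaternion-to-rotation-matrix formula (function names are our own):

```python
import math

def quat_to_rotation(q):
    """Rotation matrix of v -> q v q^{-1} for a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

# A unit quaternion for a 90 degree rotation about the z-axis.
q = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
neg_q = tuple(-c for c in q)

# q and -q are distinct points of Spin(3), yet map to the same SO(3) element.
R1, R2 = quat_to_rotation(q), quat_to_rotation(neg_q)
assert R1 == R2  # every entry is a product of two components, so signs cancel
print("q and -q give the same rotation: the covering Spin(3) -> SO(3) is 2-to-1")
```

Every matrix entry is built from products of pairs of quaternion components, so negating all four components leaves the matrix unchanged — exactly the kernel {±1} of the covering map.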
62.
Special unitary group
–
In mathematics, the special unitary group of degree n, denoted SU(n), is the Lie group of n×n unitary matrices with determinant 1. The group operation is matrix multiplication. The special unitary group is a subgroup of the unitary group U(n), consisting of all n×n unitary matrices. As a compact classical group, U(n) is the group that preserves the standard inner product on Cn. It is itself a subgroup of the general linear group: SU(n) ⊂ U(n) ⊂ GL(n, C). The SU(n) groups find wide application in the Standard Model of particle physics, especially SU(2) in the electroweak interaction and SU(3) in quantum chromodynamics. The simplest case, SU(1), is the trivial group, having only a single element. The group SU(2) is isomorphic to the group of quaternions of norm 1; since unit quaternions can be used to represent rotations in 3-dimensional space (up to sign), there is a surjective homomorphism from SU(2) to the rotation group SO(3) whose kernel is {+I, −I}. SU(2) is also identical to one of the groups of spinors, Spin(3). The special unitary group SU(n) is a real Lie group and its dimension as a real manifold is n² − 1. Topologically, it is compact and simply connected; algebraically, it is a simple Lie group. The center of SU(n) is isomorphic to the cyclic group Zn, and its outer automorphism group, for n ≥ 3, is Z2, while the outer automorphism group of SU(2) is the trivial group. A maximal torus, of rank n − 1, is given by the set of diagonal matrices with determinant 1. The Weyl group is the symmetric group Sn, which is represented by signed permutation matrices. The Lie algebra of SU(n), denoted by su(n), can be identified with the set of traceless antihermitian n×n complex matrices, with the regular commutator as Lie bracket. Particle physicists often use a different, equivalent representation: the set of traceless hermitian n×n complex matrices with Lie bracket given by −i times the commutator. The Lie algebra su(n) can be generated by n² operators Ôij, i, j = 1, 2, …, n, which satisfy the commutator relationships [Ôij, Ôkl] = δjk Ôil − δil Ôkj for i, j, k, l = 1, 2, …, n, where δjk denotes the Kronecker delta. 
Additionally, the operator N̂ = ∑i=1..n Ôii satisfies [N̂, Ôij] = 0, which implies that the number of independent generators of the Lie algebra is n² − 1. We also take ∑c,e=1..n²−1 dace dbce = ((n² − 4)/n) δab as a normalization convention. In the (n² − 1)-dimensional adjoint representation, the generators are represented by (n² − 1) × (n² − 1) matrices whose elements are defined by the structure constants themselves. SU(2) is the following group: the set of matrices of the form (α, −β̄; β, ᾱ) with α, β ∈ C and |α|² + |β|² = 1, where the overline denotes complex conjugation
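The explicit 2×2 form of SU(2) just given is easy to verify numerically: such a matrix is unitary and has determinant |α|² + |β|² = 1. A Python sketch (helper names are illustrative):

```python
import cmath, math

def su2(alpha, beta):
    """The SU(2) element [[a, -conj(b)], [b, conj(a)]]; requires |a|^2 + |b|^2 = 1."""
    return [[alpha, -beta.conjugate()], [beta, alpha.conjugate()]]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def conj_transpose(m):
    return [[m[j][i].conjugate() for j in range(2)] for i in range(2)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

# Any (cos t, sin t) pair with arbitrary phases satisfies |a|^2 + |b|^2 = 1.
a = cmath.exp(1j * 0.7) * math.cos(0.3)
b = cmath.exp(1j * 0.2) * math.sin(0.3)
U = su2(a, b)

# Determinant 1 and U U^dagger = I, hence U is in SU(2).
assert abs(det2(U) - 1) < 1e-12
prod = matmul(U, conj_transpose(U))
assert all(abs(prod[i][j] - (1 if i == j else 0)) < 1e-12 for i in range(2) for j in range(2))
print("U is unitary with determinant 1, hence in SU(2)")
```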
63.
3-sphere
–
In mathematics, a 3-sphere is a higher-dimensional analogue of a sphere. It consists of the set of points equidistant from a fixed central point in 4-dimensional Euclidean space. A 3-sphere is an example of a 3-manifold. In coordinates, a 3-sphere with center (C0, C1, C2, C3) and radius r is the set of all points (x0, x1, x2, x3) in real, 4-dimensional space such that ∑i=0..3 (xi − Ci)² = (x0 − C0)² + (x1 − C1)² + (x2 − C2)² + (x3 − C3)² = r². The 3-sphere centered at the origin with radius 1 is called the unit 3-sphere and is usually denoted S3. It is often convenient to regard R4 as the space with 2 complex dimensions, C2, or as the quaternions, H. The unit 3-sphere is then given by S3 = {(z1, z2) ∈ C2 : |z1|² + |z2|² = 1} or S3 = {q ∈ H : ‖q‖ = 1}. This description as the quaternions of norm one identifies the 3-sphere with the versors in the quaternion division ring. Just as the unit circle is important for planar polar coordinates, so the 3-sphere is important in the polar view of 4-space involved in quaternion multiplication; see polar decomposition of a quaternion for details of this development of the three-sphere. This view of the 3-sphere is the basis for the study of elliptic space as developed by Georges Lemaître. The 3-dimensional cubic hyperarea of a 3-sphere of radius r is 2π²r³, while the 4-dimensional quartic hypervolume it encloses is ½π²r⁴. Every non-empty intersection of a 3-sphere with a three-dimensional hyperplane is a 2-sphere (unless the hyperplane is tangent to the 3-sphere, in which case the intersection is a single point). As the 3-sphere moves through a hyperplane, the intersection starts as a point, grows into a 2-sphere, and then the 2-sphere shrinks again down to a point as the 3-sphere leaves the hyperplane. A 3-sphere is a compact, connected, 3-dimensional manifold without boundary. It is also simply connected; what this means, in the broad sense, is that any loop, or circular path, on the 3-sphere can be continuously shrunk to a point without leaving the 3-sphere. The Poincaré conjecture, proved in 2003 by Grigori Perelman, provides that the 3-sphere is the only three-dimensional manifold (up to homeomorphism) with these properties. The 3-sphere is homeomorphic to the one-point compactification of R3. In general, any topological space that is homeomorphic to the 3-sphere is called a topological 3-sphere. The homology groups of the 3-sphere are as follows: H0 and H3 are both infinite cyclic (isomorphic to Z), while H1 = H2 = 0. Any topological space with these homology groups is known as a homology 3-sphere. 
Initially Poincaré conjectured that all homology 3-spheres are homeomorphic to S3, but infinitely many homology spheres are now known to exist. For example, a Dehn filling with slope 1/n on any knot in the 3-sphere gives a homology sphere. As to the homotopy groups, we have π1(S3) = π2(S3) = 0 and π3(S3) is infinite cyclic. The higher homotopy groups are all finite abelian but otherwise follow no discernible pattern; for more discussion see homotopy groups of spheres. The 3-sphere is naturally a smooth manifold, in fact, a closed embedded submanifold of R4
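The hyperarea and hypervolume formulas above can be sanity-checked numerically, for instance by Monte Carlo integration of the unit 4-ball. A Python sketch (function names are our own):

```python
import math, random

def sphere3_measures(r):
    """Surface 'hyperarea' (2 pi^2 r^3) and enclosed hypervolume (pi^2 r^4 / 2)."""
    return 2 * math.pi**2 * r**3, 0.5 * math.pi**2 * r**4

area, volume = sphere3_measures(1.0)
print(area, volume)  # roughly 19.74 and 4.93

# Monte Carlo check of the hypervolume: the fraction of the 4-cube [-1,1]^4
# inside the unit 4-ball, times the cube's hypervolume 2^4 = 16.
random.seed(0)
n = 100_000
inside = sum(1 for _ in range(n)
             if sum(random.uniform(-1, 1) ** 2 for _ in range(4)) <= 1)
estimate = 16 * inside / n
assert abs(estimate - volume) < 0.1  # well within sampling error at this n
print("Monte Carlo estimate:", estimate)
```

Note the derivative relationship as well: d/dr (½π²r⁴) = 2π²r³, the hyperarea, just as the derivative of a ball's volume gives the sphere's area in lower dimensions.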
64.
Versor
–
Versors are an algebraic parametrisation of rotations. In classical quaternion theory a versor is a quaternion of norm one (a unit quaternion). Each versor has the form q = exp(ar) = cos a + r sin a, with r² = −1 and a ∈ [0, π]. In case a = π/2, the versor is termed a right versor. The corresponding 3-dimensional rotation has the angle 2a about the axis r in axis–angle representation. The word is derived from Latin versare, "to turn", with the suffix -or forming a noun from the verb. It was introduced by William Rowan Hamilton in the context of his quaternion theory; for historical reasons, it sometimes is used synonymously with "unit quaternion" without a reference to rotations. In the quaternion algebra a versor q = exp(ar) will rotate any quaternion v through the product map v ↦ q v q⁻¹ such that the scalar part of v is preserved. If this scalar part is zero, i.e. v is a Euclidean vector in three dimensions, then the formula above defines the rotation through the angle 2a around the vector r. In other words, q v q⁻¹ rotates the vector part of v around the vector r; see quaternions and spatial rotation for details. A quaternionic versor expressed in the complex 2×2 matrix representation is an element of the special unitary group SU(2). Spin(3) and SU(2) are the same group. Angles of rotation in this λ = 1/2 representation are equal to a; there is no factor of 2 in the angles, unlike in the λ = 1 adjoint representation mentioned above; see representation theory of SU(2) for details. For a fixed r, versors of the form exp(ar) where a ∈ (−π, π] form a subgroup isomorphic to the circle group. In 2003 David W. Lyons wrote that "the fibers of the Hopf map are circles in S3"; Lyons gives an introduction to quaternions to elucidate the Hopf fibration as a mapping on unit quaternions. Hamilton denoted the versor of a quaternion q by the symbol Uq. He was then able to display the general quaternion in polar coordinate form q = Tq Uq, where Tq is the norm of q. 
The norm of a versor is always equal to one; hence the versors occupy the unit 3-sphere in H. Examples of versors include the eight elements of the quaternion group. Of particular importance are the right versors, which have angle π/2; these versors have zero scalar part, and so are vectors of length one. The right versors form a sphere of square roots of −1 in the quaternion algebra; the generators i, j, and k are examples of right versors, as well as their additive inverses. Other versors include the twenty-four Hurwitz quaternions that have norm 1. Hamilton defined a quaternion as the quotient of two vectors. A versor can be defined as the quotient of two unit vectors; for any fixed plane Π, the quotient of two unit vectors lying in Π depends only on the angle between them, the same a as in the unit vector–angle representation of a versor explained above
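The defining properties — exp(ar) = cos a + r sin a, right versors as square roots of −1, and angle addition along a fixed axis — can be exercised with a small quaternion multiplication routine. A Python sketch (function names are our own):

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def versor(a, r):
    """exp(a r) = cos a + r sin a, for a unit-vector axis r (so (0, r) squares to -1)."""
    rx, ry, rz = r
    return (math.cos(a), rx * math.sin(a), ry * math.sin(a), rz * math.sin(a))

axis = (0.0, 0.0, 1.0)

# A right versor (a = pi/2) is a square root of -1.
right = versor(math.pi / 2, axis)
sq = qmul(right, right)
assert all(math.isclose(c, e, abs_tol=1e-12) for c, e in zip(sq, (-1.0, 0.0, 0.0, 0.0)))

# Versors sharing an axis multiply by adding angles: exp(ar) exp(br) = exp((a+b)r),
# the circle subgroup mentioned above.
prod = qmul(versor(0.4, axis), versor(0.3, axis))
assert all(math.isclose(c, e, abs_tol=1e-12) for c, e in zip(prod, versor(0.7, axis)))
print("right versor squares to -1; angles add along a fixed axis")
```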
65.
Quaternion
–
In mathematics, the quaternions are a number system that extends the complex numbers. They were first described by Irish mathematician William Rowan Hamilton in 1843. A feature of quaternions is that multiplication of two quaternions is noncommutative. Hamilton defined a quaternion as the quotient of two directed lines in a three-dimensional space, or equivalently as the quotient of two vectors. Quaternions are generally represented in the form a + bi + cj + dk, where a, b, c, and d are real numbers, and i, j, and k are the fundamental quaternion units. In practical applications, they can be used alongside other methods, such as Euler angles and rotation matrices, or as an alternative to them. In modern mathematical language, quaternions form a four-dimensional associative normed division algebra over the real numbers; in fact, the quaternions were the first noncommutative division algebra to be discovered. The algebra of quaternions is often denoted by H (for Hamilton), or in blackboard bold by ℍ. It can also be given by the Clifford algebra classifications Cl0,2(R) ≅ Cl⁰3,0(R). These rings are also Euclidean Hurwitz algebras, of which the quaternions are the largest associative algebra. The unit quaternions can be thought of as a choice of a group structure on the 3-sphere S3 that gives the group Spin(3). Quaternion algebra was introduced by Hamilton in 1843; Carl Friedrich Gauss had also discovered quaternions in 1819, but this work was not published until 1900. Hamilton knew that the complex numbers could be interpreted as points in a plane, and he was looking for a way to do the same for points in three-dimensional space. Points in space can be represented by their coordinates, which are triples of numbers; however, Hamilton had been stuck on the problem of multiplication and division for a long time. He could not figure out how to calculate the quotient of the coordinates of two points in space. The great breakthrough in quaternions finally came on Monday 16 October 1843 in Dublin; as Hamilton walked along the towpath of the Royal Canal with his wife, the concepts behind quaternions were taking shape in his mind. 
When the answer dawned on him, Hamilton could not resist the urge to carve the formula for the quaternions, i² = j² = k² = ijk = −1, into the stone of Brougham Bridge as he paused on it. On the following day, Hamilton wrote a letter to his friend and fellow mathematician, John T. Graves; this letter was later published in the London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, vol. xxv, pp. 489–95. In the letter, Hamilton states: "And here there dawned on me the notion that we must admit, in some sense, a fourth dimension of space for the purpose of calculating with triples ... An electric circuit seemed to close, and a spark flashed forth." Hamilton called a quadruple with these rules of multiplication a quaternion. Hamilton's treatment is more geometric than the modern approach, which emphasizes quaternions' algebraic properties
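The relations Hamilton carved into the bridge determine the entire multiplication table. A Python sketch verifying them (the representation of quaternions as 4-tuples is our own choice):

```python
def qmul(p, q):
    """Hamilton product of quaternions (w, x, y, z) = w + x i + y j + z k."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
minus_one = (-1, 0, 0, 0)

# The bridge carving: i^2 = j^2 = k^2 = ijk = -1.
assert qmul(i, i) == qmul(j, j) == qmul(k, k) == minus_one
assert qmul(qmul(i, j), k) == minus_one

# Noncommutativity: ij = k but ji = -k.
assert qmul(i, j) == k
assert qmul(j, i) == (0, 0, 0, -1)
print("Hamilton's relations hold; multiplication is noncommutative")
```

The cyclic products ij = k, jk = i, ki = j all follow from the single relation ijk = −1 together with the squares, which is why the carved formula suffices to define the system.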
66.
Absolute value
–
In mathematics, the absolute value or modulus |x| of a real number x is the non-negative value of x without regard to its sign: |x| = x for a positive x, |x| = −x for a negative x, and |0| = 0. For example, the absolute value of 3 is 3, and the absolute value of −3 is also 3. The absolute value of a number may be thought of as its distance from zero. Generalisations of the absolute value for real numbers occur in a wide variety of mathematical settings; for example, an absolute value is also defined for the complex numbers. The absolute value is closely related to the notions of magnitude, distance and norm. The term absolute value has been used in this sense from at least 1806 in French and 1857 in English. The notation |x|, with a vertical bar on each side, was introduced by Karl Weierstrass in 1841. Other names for absolute value include numerical value and magnitude. In programming languages and computational software packages, the absolute value of x is generally represented by abs(x), or a similar expression. Thus, care must be taken to interpret vertical bars as an absolute value sign only when the argument is an object for which the notion of an absolute value is defined. For any real number x, the absolute value or modulus of x is denoted by |x| and is defined as |x| = x if x ≥ 0, and |x| = −x if x < 0. As can be seen from the definition, the absolute value of x is always either positive or zero, never negative. Indeed, the notion of a distance function in mathematics can be seen to be a generalisation of the absolute value of the difference. Since the square root notation without sign represents the unique positive square root, it follows that |x| = √x²; this identity is sometimes used as a definition of the absolute value of real numbers. The absolute value has the following four fundamental properties: non-negativity, |a| ≥ 0; positive-definiteness, |a| = 0 if and only if a = 0; multiplicativity, |ab| = |a||b|; and subadditivity, |a + b| ≤ |a| + |b|. Non-negativity, positive-definiteness and multiplicativity are readily apparent from the definition. To see that subadditivity holds, choose a sign s = ±1 so that s(a + b) ≥ 0; then |a + b| = s(a + b) = sa + sb ≤ |a| + |b|, since sa ≤ |a| and sb ≤ |b|. Some additional useful properties are given below. 
These properties are either implied by or equivalent to the four fundamental properties above; for example, absolute value is used to define the absolute difference, the standard metric on the real numbers. Since the complex numbers are not ordered, the definition given above for the real absolute value cannot be directly generalised to a complex number; however, the identification of the absolute value of a number with its distance from zero can be generalised
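The piecewise definition, the identity |x| = √x², and the four fundamental properties can all be spot-checked in a few lines of Python (the `my_abs` name is our own; real code would use the built-in `abs`):

```python
import math

def my_abs(x):
    """Piecewise definition: |x| = x if x >= 0, else -x."""
    return x if x >= 0 else -x

# |x| agrees with sqrt(x^2) on exactly-representable samples.
for x in (-3.0, -0.5, 0.0, 2.25):
    assert my_abs(x) == math.sqrt(x * x)

# The fundamental properties, checked on a sample of pairs.
samples = [(-3, 2), (1.5, -4), (0, 7), (-2, -5)]
for a, b in samples:
    assert my_abs(a) >= 0                          # non-negativity
    assert my_abs(a * b) == my_abs(a) * my_abs(b)  # multiplicativity
    assert my_abs(a + b) <= my_abs(a) + my_abs(b)  # subadditivity (triangle inequality)
print("all properties hold on the samples")
```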
67.
Computer graphics
–
Computer graphics are pictures and films created using computers. Usually, the term refers to computer-generated image data created with help from specialized graphical hardware and software. It is a vast and recently developed area of computer science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG, though sometimes referred to as CGI. The overall methodology depends heavily on the underlying sciences of geometry, optics, and physics. Computer graphics is responsible for displaying art and image data effectively and meaningfully to the user; it is also used for processing image data received from the physical world. Computer graphics development has had a significant impact on many types of media and has revolutionized animation, movies, advertising, and video games. The term computer graphics has been used in a broad sense to describe almost everything on computers that is not text or sound. Such imagery is found in and on television, newspapers, and weather reports; a well-constructed graph can present complex statistics in a form that is easier to understand and interpret. In the media such graphs are used to illustrate papers, reports, and theses, and many tools have been developed to visualize data. Computer-generated imagery can be categorized into several different types: two-dimensional, three-dimensional, and animated graphics. As technology has improved, 3D computer graphics have become more common. Computer graphics has emerged as a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Screens could display art since the Lumière brothers' use of mattes to create special effects for the earliest films dating from 1895. New kinds of displays were needed to process the wealth of information resulting from such projects; early projects like the Whirlwind and SAGE projects introduced the CRT as a viable display and interaction interface and introduced the light pen as an input device. 
Douglas T. Ross of the Whirlwind SAGE system performed an experiment in 1954 in which a small program he wrote captured the movement of his finger. Electronics pioneer Hewlett-Packard went public in 1957 after incorporating the decade prior, and established strong ties with Stanford University through its founders, who were alumni. This began the transformation of the southern San Francisco Bay Area into the world's leading computer technology hub, now known as Silicon Valley. The field of computer graphics developed with the emergence of computer graphics hardware, and further advances in computing led to greater advancements in interactive computer graphics. In 1959, the TX-2 computer was developed at MIT's Lincoln Laboratory; the TX-2 integrated a number of new man-machine interfaces
68.
Quaternions and spatial rotation
–
Unit quaternions, also known as versors, provide a convenient mathematical notation for representing orientations and rotations of objects in three dimensions. Compared to Euler angles they are simpler to compose and avoid the problem of gimbal lock; compared to rotation matrices they are more compact, more numerically stable, and may be more efficient. Quaternions have applications in computer graphics, computer vision, robotics, navigation, molecular dynamics, flight dynamics, and the orbital mechanics of satellites. When used to represent rotation, unit quaternions are also called rotation quaternions; when used to represent an orientation, they are called orientation quaternions or attitude quaternions. The Euler axis is represented by a unit vector u→. A Euclidean vector such as (2, 3, 4) or (ax, ay, az) can be rewritten as 2 i + 3 j + 4 k or ax i + ay j + az k, where i, j, k are unit vectors representing the three Cartesian axes. A rotation through an angle of θ around the axis defined by a unit vector u→ = (ux, uy, uz) = ux i + uy j + uz k can be represented by the quaternion q = e^((θ/2)(ux i + uy j + uz k)) = cos(θ/2) + (ux i + uy j + uz k) sin(θ/2). The rotation can be applied to an ordinary vector p by the conjugation p′ = q p q⁻¹. In a programmatic implementation, this is achieved by constructing a quaternion whose vector part is p and whose real part equals zero; the vector part of the resulting quaternion is the desired vector p′. The rotation is clockwise if our line of sight points in the same direction as u→. In this instance, q is a unit quaternion and q⁻¹ = e^(−(θ/2)(ux i + uy j + uz k)) = cos(θ/2) − (ux i + uy j + uz k) sin(θ/2), and the scalar component of the result is necessarily zero. The quaternion inverse of a rotation is the opposite rotation, since q⁻¹ (q v→ q⁻¹) q = v→. The square of a rotation quaternion is a rotation by twice the angle around the same axis. More generally, qⁿ is a rotation by n times the angle around the same axis as q. This can be extended to arbitrary real n, allowing for smooth interpolation between spatial orientations; see Slerp. 
Two rotation quaternions can be combined into one equivalent quaternion by the relation q″ = q₂ q₁, in which q″ corresponds to the rotation q₁ followed by the rotation q₂; thus, an arbitrary number of rotations can be composed together and then applied as a single rotation. Conjugating p by q refers to the operation p ↦ q p q⁻¹. Consider the rotation f around the axis v→ = i + j + k, with a rotation angle of 120°, or 2π/3 radians: α = 2π/3. The length of v→ is √3; the half-angle is π/3 (60°), with cosine 1/2 and sine √3/2. Let us show how we reached the previous result. As we can see, such computations are relatively long and tedious if done manually; however, in a computer program, this amounts to calling the quaternion multiplication routine twice
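The conjugation p ↦ q p q⁻¹ described above is short to implement. The following is a minimal sketch in plain Python (the helper names quat_mul, rotation_quat, and rotate are ours, not from the text); it rotates the x axis by 120° about (1, 1, 1), the example discussed above, which cyclically permutes the coordinate axes:

```python
import math

def quat_mul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z) tuples
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotation_quat(axis, angle):
    # q = cos(θ/2) + u sin(θ/2), with the axis normalized to a unit vector
    n = math.sqrt(sum(c * c for c in axis))
    s = math.sin(angle / 2) / n
    return (math.cos(angle / 2), axis[0]*s, axis[1]*s, axis[2]*s)

def rotate(p, q):
    # conjugation q p q⁻¹, embedding p as a pure quaternion (zero real part);
    # for a unit quaternion the inverse is just the conjugate
    qc = (q[0], -q[1], -q[2], -q[3])
    w, x, y, z = quat_mul(quat_mul(q, (0.0, *p)), qc)
    return (x, y, z)

q = rotation_quat((1.0, 1.0, 1.0), 2 * math.pi / 3)
print(rotate((1.0, 0.0, 0.0), q))  # ≈ (0, 1, 0): x maps to y, up to rounding
```

Composing two rotations is then just quat_mul(q2, q1), matching the relation q″ = q₂ q₁ above.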
69.
Surjective
–
It is not required that x is unique; the function f may map one or more elements of X to the same element of Y. The French prefix sur means over or above and relates to the fact that the image of the domain of a surjective function completely covers the function's codomain. Any function induces a surjection by restricting its codomain to its range. Every surjective function has a right inverse, and every function with a right inverse is necessarily a surjection. The composite of surjective functions is always surjective, and any function can be decomposed into a surjection and an injection. A surjective function is a function whose image is equal to its codomain. Equivalently, a function f with domain X and codomain Y is surjective if for every y in Y there exists at least one x in X with f(x) = y. Surjections are sometimes denoted by a two-headed rightwards arrow, as in f : X ↠ Y. Symbolically, if f : X → Y, then f is said to be surjective if ∀ y ∈ Y, ∃ x ∈ X, f(x) = y. For any set X, the identity function idX on X is surjective. The function f : Z → {0, 1} defined by f(n) = n mod 2 is surjective. The function f : R → R defined by f(x) = 2x + 1 is surjective, because for every real number y we have an x such that f(x) = y; an appropriate x is (y − 1)/2. The function f : R → R defined by f(x) = x³ − 3x is surjective, because the pre-image of any real number y is the solution set of the cubic equation x³ − 3x − y = 0, and every cubic equation with real coefficients has at least one real root. However, this function is not injective, since, e.g., the pre-image of y = 2 is {−1, 2}. The function g : R → R defined by g(x) = x² is not surjective, because there is no real number x such that x² = −1. However, the function g : R → R≥0 defined by g(x) = x² is surjective, because for every y in the nonnegative real codomain Y there is at least one x in the real domain X such that x² = y. The natural logarithm ln : (0, +∞) → R is surjective. Its inverse, the exponential function, is not surjective if defined with the set of real numbers as its codomain, as its range is the set of positive real numbers. The matrix exponential is not surjective when seen as a map from the space of all n×n matrices to itself. It is, however, usually defined as a map from the space of all n×n matrices to the general linear group of degree n, i.e. the group of all n×n invertible matrices. 
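On finite sets, the defining condition, that the image covers the codomain, can be checked directly. A small sketch in Python (the helper name is_surjective and the particular finite slices of Z are ours, for illustration only):

```python
def is_surjective(f, domain, codomain):
    # f is surjective onto the codomain iff its image over the domain
    # contains every element of the codomain
    return {f(x) for x in domain} >= set(codomain)

# f(n) = n mod 2 from a slice of Z onto {0, 1}: surjective
assert is_surjective(lambda n: n % 2, range(-5, 5), {0, 1})

# g(x) = x*x from {-2, ..., 2} into {-1, ..., 4}: not surjective,
# since no x satisfies x*x == -1
assert not is_surjective(lambda x: x * x, range(-2, 3), range(-1, 5))
```

Note that the answer depends on the chosen codomain, not just on the graph of the function, which is exactly the point made below about surjectivity.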
Under this definition the matrix exponential is surjective for complex matrices. The projection from a Cartesian product A × B to one of its factors is surjective, unless the other factor is empty. In a 3D video game, vectors are projected onto a 2D flat screen by means of a surjective function. A function is bijective if and only if it is both surjective and injective. If a function is identified with its graph, then surjectivity is not a property of the function itself, but rather a property of the function together with its codomain; unlike injectivity, surjectivity cannot be read off of the graph of the function alone. The function g : Y → X is said to be a right inverse of the function f : X → Y if f(g(y)) = y for every y in Y
70.
Homomorphism
–
In algebra, a homomorphism is a structure-preserving map between two algebraic structures of the same type. The word homomorphism comes from the ancient Greek language: ὁμός (homos) meaning same and μορφή (morphe) meaning form or shape. Homomorphisms of vector spaces are also called linear maps, and their study is the object of linear algebra. The concept of homomorphism has been generalized, under the name of morphism, to other structures that either do not have an underlying set or are not algebraic. This generalization is the starting point of category theory. Being an isomorphism, an automorphism, or an endomorphism is a property of some homomorphisms. A homomorphism is a map between two algebraic structures of the same type that preserves the operations of the structures. One often says that f preserves the operation or is compatible with the operation. Formally, a map f : A → B preserves an operation μ of arity k, defined on both A and B, if f(μA(a1, …, ak)) = μB(f(a1), …, f(ak)) for all elements a1, …, ak of A. For example: a semigroup homomorphism is a map between semigroups that preserves the semigroup operation; a monoid homomorphism is a map between monoids that preserves the monoid operation and maps the identity element of the first monoid to that of the second monoid; a group homomorphism is a map between groups that preserves the group operation (thus a semigroup homomorphism between groups is necessarily a group homomorphism); a ring homomorphism is a map between rings that preserves the ring addition, the multiplication, and the multiplicative identity. Whether the multiplicative identity is to be preserved depends upon the definition of ring in use; if the multiplicative identity is not preserved, one has a rng homomorphism. A linear map is a homomorphism of vector spaces, that is, a homomorphism between vector spaces that preserves the abelian group structure and scalar multiplication. A module homomorphism, also called a linear map between modules, is defined similarly. 
An algebra homomorphism is a map that preserves the algebra operations. An algebraic structure may have more than one operation, and a homomorphism is required to preserve each operation. Thus a map that preserves only some of the operations is not a homomorphism of the structure. For example, a map between monoids that preserves the monoid operation but not the identity element is not a monoid homomorphism. The notation for the operations does not need to be the same in the source and the target of a homomorphism. For example, the real numbers form a group for addition, and the positive real numbers form a group for multiplication. The exponential function x ↦ e^x satisfies e^(x + y) = e^x e^y and is thus a homomorphism between these two groups. It is even an isomorphism, as its inverse function, the natural logarithm, satisfies ln(xy) = ln(x) + ln(y)
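The exponential example, a homomorphism from (R, +) to the positive reals under multiplication, can be spot-checked numerically. A minimal sketch (the random sampling loop is ours, purely illustrative):

```python
import math
import random

# exp maps sums to products: exp(x + y) == exp(x) * exp(y),
# and its inverse ln maps products back to sums.
random.seed(0)
for _ in range(1000):
    x = random.uniform(-5.0, 5.0)
    y = random.uniform(-5.0, 5.0)
    assert math.isclose(math.exp(x + y), math.exp(x) * math.exp(y),
                        rel_tol=1e-12)
    assert math.isclose(math.log(math.exp(x) * math.exp(y)), x + y,
                        rel_tol=1e-9, abs_tol=1e-9)
```

The identity elements also correspond, as the general theory requires: exp(0) = 1 maps the additive identity to the multiplicative identity.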
71.
Covering map
–
A covering map is a continuous surjective function p from a topological space C onto a topological space X such that each point in X has an open neighbourhood evenly covered by p. In this case, C is called a covering space and X the base space of the covering projection. The definition implies that every covering map is a local homeomorphism. Covering spaces play an important role in homotopy theory, harmonic analysis, Riemannian geometry and differential topology. In Riemannian geometry, for example, ramification is a generalization of the notion of covering maps. Covering spaces are also deeply intertwined with the study of homotopy groups and, in particular, the fundamental group. Let X be a topological space; a covering space of X is a space C together with a continuous surjective map p : C → X such that every x in X has an open neighborhood U whose preimage p⁻¹(U) is a union of disjoint open sets in C, each mapped homeomorphically onto U by p. The map p is called the covering map, the space X is often called the base space of the covering, and the space C is called the total space of the covering. For any point x in the base, the inverse image of x in C is necessarily a discrete space, called the fiber over x; the special open neighborhoods U of x given in the definition are called evenly covered neighborhoods. The evenly covered neighborhoods form an open cover of the space X, and the homeomorphic copies in C of an evenly covered neighborhood U are called the sheets over U. In particular, covering maps are locally trivial. Many authors impose some connectivity conditions on the spaces X and C in the definition of a covering map. In particular, many authors require both spaces to be path-connected and locally path-connected; this can prove helpful because many theorems hold only if the spaces in question have these properties. Some authors omit the assumption of surjectivity, for if X is connected and C is nonempty, surjectivity follows from the other axioms. A connected and locally path-connected topological space X has a universal cover if and only if it is semi-locally simply connected. ℝ is the universal cover of the unit circle S1. The spin group Spin(n) is a double cover of the special orthogonal group SO(n). The accidental, or exceptional, isomorphisms for Lie groups then give isomorphisms between spin groups in low dimension and classical Lie groups; the unitary group U(n) has universal cover SU(n) × ℝ. 
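The covering ℝ → S1 mentioned above can be written out explicitly; the following sketch of the standard construction (not taken verbatim from the text) shows how each point of the circle is evenly covered:

```latex
p \colon \mathbb{R} \to S^1, \qquad p(t) = (\cos 2\pi t,\ \sin 2\pi t).
% The fiber over any x \in S^1 with p(t_0) = x is a translate of \mathbb{Z}:
p^{-1}(x) = \{\, t_0 + n : n \in \mathbb{Z} \,\},
% and a small arc U around x is evenly covered, the sheets being the intervals
p^{-1}(U) = \bigsqcup_{n \in \mathbb{Z}} \bigl(t_0 - \varepsilon + n,\ t_0 + \varepsilon + n\bigr),
% each mapped homeomorphically onto U by p.
```

Each interval is one sheet over U, and p restricted to any single sheet is a homeomorphism onto U, exactly as the definition requires.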
The n-sphere Sn is a double cover of real projective space RPn and is a universal cover for n > 1. Every manifold has an orientable double cover that is connected if and only if the manifold is non-orientable. The uniformization theorem asserts that every Riemann surface has a universal cover conformally equivalent to the Riemann sphere, the complex plane, or the unit disc. The universal cover of a wedge of n circles is the Cayley graph of the free group on n generators, i.e. a Bethe lattice. The torus is a double cover of the Klein bottle. Every graph has a bipartite double cover. Since every graph is homotopy equivalent to a wedge of circles, its universal cover is a Cayley graph
72.
Stereographic projection
–
In geometry, the stereographic projection is a particular mapping that projects a sphere onto a plane. The projection is defined on the entire sphere, except at one point: the projection point. Where it is defined, the mapping is smooth and bijective, and it is conformal, meaning that it preserves angles. It is neither isometric nor area-preserving; that is, it preserves neither distances nor the areas of figures. Intuitively, then, the stereographic projection is a way of picturing the sphere as the plane, with some inevitable compromises. In practice, the projection is carried out by computer or by using a special kind of graph paper called a stereographic net, shortened to stereonet. The stereographic projection was known to Hipparchus, Ptolemy and probably earlier to the Egyptians; it was originally known as the planisphere projection. Planisphaerium by Ptolemy is the oldest surviving document that describes it. One of its most important uses was the representation of celestial charts, and the term planisphere is still used to refer to such charts. In the 16th and 17th century, the equatorial aspect of the stereographic projection was commonly used for maps of the Eastern and Western Hemispheres. It is believed that already the map created in 1507 by Gualterius Lud was in stereographic projection, as were later the maps of Jean Roze and Rumold Mercator. In star charts, even this equatorial aspect had been utilised already by ancient astronomers like Ptolemy. François d'Aguilon gave the stereographic projection its current name in his 1613 work Opticorum libri sex philosophis juxta ac mathematicis utiles. In 1695, Edmond Halley, motivated by his interest in star charts, published the first mathematical proof that this map is conformal. He used the recently established tools of calculus, invented by his friend Isaac Newton. This section focuses on the projection of the unit sphere from the north pole onto the plane through the equator. 
Other formulations are treated in later sections. The unit sphere in three-dimensional space R3 is the set of points (x, y, z) such that x2 + y2 + z2 = 1. Let N = (0, 0, 1) be the north pole, and let M be the rest of the sphere. The plane z = 0 runs through the center of the sphere. For any point P on M, there is a unique line through N and P, and this line intersects the plane z = 0 in exactly one point P′. Define the stereographic projection of P to be this point P′ in the plane. In Cartesian coordinates (x, y, z) on the sphere and (X, Y) on the plane, the projection and its inverse are given by the formulas (X, Y) = (x/(1 − z), y/(1 − z)) and (x, y, z) = (2X/(1 + X2 + Y2), 2Y/(1 + X2 + Y2), (−1 + X2 + Y2)/(1 + X2 + Y2)). Analogous formulas hold in spherical coordinates on the sphere and polar coordinates on the plane; here, φ is understood to have value π when R = 0. Also, there are many ways to rewrite these formulas using trigonometric identities. Similar formulas hold in cylindrical coordinates on the sphere and polar coordinates on the plane. The projection is not defined at the projection point N = (0, 0, 1)
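The Cartesian formulas above can be checked with a round trip through the plane and back. A minimal sketch in Python (the function names stereo and stereo_inv are ours):

```python
def stereo(x, y, z):
    # project a sphere point (other than N = (0, 0, 1)) from the north pole
    # onto the equatorial plane z = 0
    return (x / (1 - z), y / (1 - z))

def stereo_inv(X, Y):
    # inverse map: plane point back to the unit sphere
    s = 1 + X*X + Y*Y
    return (2*X / s, 2*Y / s, (-1 + X*X + Y*Y) / s)

# round-trip a point on the unit sphere (0.36 + 0.64 = 1, so it lies on S2)
p = (0.6, 0.0, 0.8)
P = stereo(*p)       # lands at roughly (3, 0) in the plane
q = stereo_inv(*P)   # returns to (0.6, 0.0, 0.8) up to rounding
```

Points near the north pole are sent far from the origin, which is why the projection is not area-preserving.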
73.
Euler angles
–
The Euler angles are three angles introduced by Leonhard Euler to describe the orientation of a rigid body with respect to a fixed coordinate system. They can also represent the orientation of a frame of reference in physics or the orientation of a general basis in 3-dimensional linear algebra. Any orientation can be achieved by composing three elemental rotations, i.e. rotations about the axes of a coordinate system, and Euler angles can be defined by three of these rotations. They can also be defined by geometry, and the geometrical definition demonstrates that three rotations are always sufficient to reach any frame. The three elemental rotations may be extrinsic or intrinsic. Euler angles are typically denoted as α, β, γ, or φ, θ, ψ. Different authors may use different sets of rotation axes to define Euler angles; therefore, any discussion employing Euler angles should always be preceded by their definition. Tait–Bryan angles are also called Cardan angles, nautical angles, heading, elevation, and bank, or yaw, pitch, and roll. Sometimes, both kinds of sequences are called Euler angles; in that case, the sequences of the first group are called proper or classic Euler angles. The axes of the original frame are denoted as x, y, z and the axes of the rotated frame are denoted as X, Y, Z. The geometrical definition begins by defining the line of nodes N as the intersection of the planes xy and XY. Using it, the three Euler angles can be defined as follows: α is the angle between the x axis and the N axis; β is the angle between the z axis and the Z axis; γ is the angle between the N axis and the X axis. Euler angles between two frames are defined only if both frames have the same handedness. Intrinsic rotations are elemental rotations that occur about the axes of a coordinate system XYZ attached to a moving body. Therefore, the axes change their orientation after each elemental rotation: the XYZ system rotates, while xyz is fixed. 
Starting with XYZ overlapping xyz, a composition of three intrinsic rotations can be used to reach any target orientation for XYZ. Euler angles can be defined by intrinsic rotations: the rotated frame XYZ may be imagined to be initially aligned with xyz, before undergoing the three elemental rotations represented by Euler angles. Hence, N can be simply denoted x′; moreover, since the third elemental rotation occurs about Z, it does not change the orientation of Z. Extrinsic rotations are elemental rotations that occur about the axes of the fixed coordinate system xyz. The XYZ system rotates, while xyz is fixed. Starting with XYZ overlapping xyz, a composition of three extrinsic rotations can be used to reach any target orientation for XYZ
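One common proper Euler convention is the z-x′-z″ sequence of intrinsic rotations. A minimal sketch in Python of how the three elemental rotations compose into a single rotation matrix (the helper names are ours; the right-multiplication order for intrinsic rotations is the standard convention):

```python
import math

def rot_z(a):
    # elemental rotation about the z axis
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_x(a):
    # elemental rotation about the x axis
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_zxz(alpha, beta, gamma):
    # intrinsic z-x'-z'' rotations compose by right-multiplication; this is
    # the same matrix as the extrinsic z-x-z sequence applied in reverse order
    return matmul(matmul(rot_z(alpha), rot_x(beta)), rot_z(gamma))
```

A quick sanity check of the convention: with β = 0 the first and third rotations share an axis, so euler_zxz(α, 0, γ) equals a single rotation about z by α + γ, which is precisely the gimbal-lock degeneracy, since only the sum α + γ is then determined.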
74.
Group homomorphism
–
From this property, one can deduce that h maps the identity element eG of G to the identity element eH of H, and it also maps inverses to inverses in the sense that h(u⁻¹) = h(u)⁻¹. Hence one can say that h is compatible with the group structure. Older notations for the homomorphism h(x) may be xh, though this may be confused as an index or a general subscript. A more recent trend is to write group homomorphisms on the right of their arguments, omitting brackets; this approach is especially prevalent in areas of group theory where automata play a role, since it accords better with the convention that automata read words from left to right. In areas of mathematics where one considers groups endowed with additional structure, a homomorphism is often required to respect that structure as well; for example, a homomorphism of topological groups is often required to be continuous. The purpose of defining a group homomorphism is to create functions that preserve the algebraic structure. An equivalent definition of group homomorphism is: the function h : G → H is a group homomorphism if whenever a ∗ b = c we have h(a) ⋅ h(b) = h(c). In other words, the group H in some sense has a similar algebraic structure as G. Monomorphism: a group homomorphism that is injective, i.e. preserves distinctness. Epimorphism: a group homomorphism that is surjective, i.e. reaches every point in the codomain. Isomorphism: a group homomorphism that is bijective, i.e. injective and surjective; its inverse is also a group homomorphism, and in this case the groups G and H are called isomorphic. Endomorphism: a group homomorphism h : G → G whose domain and codomain are the same, also called an endomorphism of G. Automorphism: an endomorphism that is bijective. The set of all automorphisms of a group G, with functional composition as operation, itself forms a group, the automorphism group of G. As an example, the automorphism group of (Z, +) contains only two elements, the identity transformation and multiplication with −1; it is isomorphic to Z/2Z. 
We define the kernel of h to be the set of elements in G which are mapped to the identity in H: ker(h) = { u ∈ G : h(u) = eH }. The kernel and image of a homomorphism can be interpreted as measuring how close it is to being an isomorphism. The first isomorphism theorem states that the image of a group homomorphism, h(G), is isomorphic to the quotient group G/ker(h). The homomorphism h is a group monomorphism, i.e. h is injective, if and only if ker(h) = {eG}. The map h : Z → Z/3Z with h(u) = u mod 3 is a group homomorphism; it is surjective and its kernel consists of all integers which are divisible by 3. The exponential map yields a group homomorphism from the group of real numbers R with addition to the group of non-zero real numbers R* with multiplication. The kernel is {0} and the image consists of the positive real numbers. The exponential map also yields a group homomorphism from the group of complex numbers C with addition to the group of non-zero complex numbers C* with multiplication. This map is surjective and has the kernel { 2πki : k ∈ Z }, as can be seen from Euler's formula. Fields like R and C that have homomorphisms from their additive group to their multiplicative group are thus called exponential fields
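The map h : Z → Z/3Z above is small enough to verify exhaustively on a finite slice of Z. A minimal sketch (the range of test integers is ours, chosen arbitrarily):

```python
def h(u):
    # h(u) = u mod 3, a homomorphism from (Z, +) onto Z/3Z
    return u % 3

ints = range(-20, 21)

# homomorphism property: h(a + b) equals h(a) + h(b) computed in Z/3Z
assert all(h(a + b) == (h(a) + h(b)) % 3 for a in ints for b in ints)

# surjectivity onto Z/3Z = {0, 1, 2}
assert {h(u) for u in ints} == {0, 1, 2}

# the kernel is exactly the multiples of 3
kernel = [u for u in ints if h(u) == 0]
assert all(u % 3 == 0 for u in kernel)
```

Consistent with the first isomorphism theorem, the image {0, 1, 2} is the quotient Z/3Z, and the kernel 3Z is the normal subgroup being quotiented out.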