1.
Special orthogonal group
–
Equivalently, it is the group of n×n orthogonal matrices, where the group operation is given by matrix multiplication; an orthogonal matrix is a real matrix whose inverse equals its transpose. An important subgroup of O(n) is the special orthogonal group, denoted SO(n), of orthogonal matrices of determinant 1. This group is also called the rotation group, because, in dimensions 2 and 3, its elements are the usual rotations about a point or an axis. In low dimensions, these groups have been widely studied; see SO(2), SO(3) and SO(4). The orthogonal group is a subgroup of the general linear group GL(n, R), given by O(n) = {Q ∈ GL(n, R) : QᵀQ = QQᵀ = I}, where Qᵀ is the transpose of Q and I is the identity matrix. This article mainly discusses the orthogonal groups of quadratic forms that may be expressed over some bases as the dot product; over the reals, these are the positive definite forms. Over the reals, for any non-degenerate quadratic form, there is a basis on which the form is a sum and difference of squares. Thus the orthogonal group depends only on the numbers of 1s and of −1s, and is denoted O(p, q); for details, see indefinite orthogonal group. The derived subgroup Ω(n) of O(n) is an often studied object because the Cartan–Dieudonné theorem describes the structure of the orthogonal group for a non-singular form. The determinant of any orthogonal matrix is either 1 or −1; the orthogonal n-by-n matrices with determinant 1 form a normal subgroup of O(n) known as the special orthogonal group SO(n), consisting of all proper rotations. By analogy with GL(n)–SL(n), the orthogonal group is sometimes called the general orthogonal group and denoted GO(n). The term rotation group can refer to either the special or the general orthogonal group. When this distinction is to be emphasized, the groups may be denoted GO(n) and SO(n), reserving n for the dimension of the space. The letters p or r are also used, indicating the rank of the corresponding Lie algebra; in odd dimension 2r + 1 the corresponding Lie algebra is 𝔰𝔬(2r + 1), while in even dimension 2r the Lie algebra is 𝔰𝔬(2r). In two dimensions, O(2) is the group of all rotations about the origin and all reflections along a line through the origin, while SO(2) is the group of all rotations about the origin. These groups are closely related: SO(2) is a subgroup of O(2) of index 2.
More generally, in any number of dimensions an even number of reflections gives a rotation; therefore, the rotations form a subgroup of O(n), but the reflections do not form a subgroup, since the composition of two reflections is a rotation. A reflection through the origin may be generated as a combination of one reflection along each of the axes; the reflection through the origin is not a reflection in the usual sense in even dimensions, but rather a rotation.
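The determinant criterion above can be made concrete with a small Python sketch (using plain 2×2 lists rather than a linear-algebra library; the angle values are arbitrary illustrative choices): rotations have determinant +1, reflections have determinant −1, and the product of two reflections is again a rotation.

```python
import math

def mat_mul(A, B):
    # 2x2 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def rotation(theta):
    # Element of SO(2): determinant +1.
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def reflection(theta):
    # Reflection across the line at angle theta/2: determinant -1.
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s], [s, -c]]

R, F1, F2 = rotation(math.pi / 3), reflection(0.4), reflection(1.1)
print(round(det(R), 6))                 # 1.0: proper rotation
print(round(det(F1), 6))                # -1.0: reflection
print(round(det(mat_mul(F1, F2)), 6))   # 1.0: two reflections give a rotation
```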
2.
Classical mechanics
–
In physics, classical mechanics is one of the two major sub-fields of mechanics, along with quantum mechanics. Classical mechanics is concerned with the set of physical laws describing the motion of bodies under the influence of a system of forces. The study of the motion of bodies is an ancient one, making classical mechanics one of the oldest and largest subjects in science, engineering and technology. Classical mechanics describes the motion of macroscopic objects, from projectiles to parts of machinery, as well as astronomical objects, such as spacecraft, planets and stars. Within classical mechanics are fields of study that describe the behavior of solids, liquids and gases. Classical mechanics also provides extremely accurate results as long as the domain of study is restricted to large objects and the speeds involved do not approach the speed of light. When both quantum mechanics and classical mechanics cannot apply, such as at the quantum level with high speeds, quantum field theory becomes applicable. Since these aspects of physics were developed long before the emergence of quantum physics and relativity, the term classical mechanics often refers to pre-relativistic physics; however, a number of modern sources do include relativistic mechanics, which in their view represents classical mechanics in its most developed and accurate form. Later, more abstract and general methods were developed, leading to reformulations of classical mechanics known as Lagrangian mechanics and Hamiltonian mechanics. These advances were largely made in the 18th and 19th centuries, and they extend substantially beyond Newton's work, particularly through their use of analytical mechanics. The following introduces the basic concepts of classical mechanics. For simplicity, it often models real-world objects as point particles; the motion of a point particle is characterized by a small number of parameters: its position, mass, and the forces applied to it. Each of these parameters is discussed in turn. In reality, the kind of objects that classical mechanics can describe always have a non-zero size.
Objects with non-zero size have more complicated behavior than hypothetical point particles, because of their additional degrees of freedom. However, the results for point particles can be applied to such objects by treating them as composite objects, made of a large number of collectively acting point particles. The center of mass of a composite object behaves like a point particle. Classical mechanics uses common-sense notions of how matter and forces exist and interact. It assumes that matter and energy have definite, knowable attributes such as where an object is in space; non-relativistic mechanics also assumes that forces act instantaneously. The position of a point particle is defined with respect to a fixed reference point in space called the origin O. A simple coordinate system might describe the position of a point P by means of a vector designated r, pointing from the origin O to P. In general, the point particle need not be stationary relative to O, so that r is a function of t, the time.
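The point-particle picture can be illustrated with a short, hypothetical Python sketch (the mass, force and initial conditions are made-up numbers, not from the article): under a constant force, Newton's second law gives a constant acceleration, so the position r(t) = r0 + v0·t + ½·a·t² can be written down analytically and cross-checked with a simple numerical integration.

```python
# Point particle under a constant force (hypothetical values).
m = 2.0             # mass (kg)
F = (0.0, -19.62)   # constant force (N)
a = (F[0] / m, F[1] / m)

r0, v0 = (0.0, 0.0), (3.0, 4.0)

def position(t):
    """Analytic position r(t) = r0 + v0*t + 0.5*a*t^2."""
    return tuple(r0[i] + v0[i] * t + 0.5 * a[i] * t * t for i in range(2))

# Cross-check with a simple semi-implicit Euler integration of the same motion.
dt, r, v = 1e-4, list(r0), list(v0)
for _ in range(int(1.0 / dt)):      # integrate up to t = 1 s
    for i in range(2):
        v[i] += a[i] * dt
        r[i] += v[i] * dt

print(position(1.0))   # analytic position at t = 1 s
print(tuple(r))        # numerical result, agreeing to ~1e-3
```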
3.
Geometry
–
Geometry is a branch of mathematics concerned with questions of shape, size, relative position of figures, and the properties of space. A mathematician who works in the field of geometry is called a geometer. Geometry arose independently in a number of early cultures as a practical way for dealing with lengths, areas, and volumes. Geometry began to see elements of formal mathematical science emerging in the West as early as the 6th century BC. By the 3rd century BC, geometry was put into an axiomatic form by Euclid, whose treatment, Euclid's Elements, set a standard for many centuries to follow. Geometry arose independently in India, with texts providing rules for geometric constructions appearing as early as the 3rd century BC. Islamic scientists preserved Greek ideas and expanded on them during the Middle Ages. By the early 17th century, geometry had been put on a solid analytic footing by mathematicians such as René Descartes. Since then, and into modern times, geometry has expanded into non-Euclidean geometry and manifolds. While geometry has evolved significantly throughout the years, there are some general concepts that are more or less fundamental to geometry. These include the concepts of points, lines, planes, surfaces, and angles. Contemporary geometry has many subfields. Euclidean geometry is geometry in its classical sense; the mandatory educational curriculum of the majority of nations includes the study of points, lines, planes, angles, triangles, congruence, similarity, solid figures, and circles. Euclidean geometry also has applications in computer science, crystallography, and various branches of modern mathematics. Differential geometry uses techniques of calculus and linear algebra to study problems in geometry. It has applications in physics, including in general relativity. Topology is the field concerned with the properties of geometric objects that are unchanged by continuous mappings.
In practice, this often means dealing with large-scale properties of spaces, such as connectedness and compactness. Convex geometry investigates convex shapes in the Euclidean space and its more abstract analogues, often using techniques of real analysis. It has close connections to convex analysis, optimization and functional analysis. Algebraic geometry studies geometry through the use of multivariate polynomials and other algebraic techniques. It has applications in many areas, including cryptography and string theory. Discrete geometry is concerned mainly with questions of relative position of simple geometric objects, such as points, lines and circles. It shares many methods and principles with combinatorics. Geometry has applications to many fields, including art, architecture and physics, as well as to other branches of mathematics. The earliest recorded beginnings of geometry can be traced to ancient Mesopotamia and Egypt. The earliest known texts on geometry are the Egyptian Rhind Papyrus and Moscow Papyrus, and Babylonian clay tablets such as Plimpton 322. For example, the Moscow Papyrus gives a formula for calculating the volume of a truncated pyramid. Later clay tablets demonstrate that Babylonian astronomers implemented trapezoid procedures for computing Jupiter's position and motion within time-velocity space.
4.
Group (mathematics)
–
In mathematics, a group is an algebraic structure consisting of a set of elements equipped with an operation that combines any two elements to form a third element. The operation satisfies four conditions called the group axioms, namely closure, associativity, identity and invertibility. This axiomatic treatment allows entities with highly diverse mathematical origins in abstract algebra and beyond to be handled in a flexible way while retaining their essential structural aspects. The ubiquity of groups in areas within and outside mathematics makes them a central organizing principle of contemporary mathematics. Groups share a fundamental kinship with the notion of symmetry. The concept of a group arose from the study of polynomial equations; after contributions from other fields such as number theory and geometry, the group notion was generalized and firmly established around 1870. Modern group theory, an active mathematical discipline, studies groups in their own right. To explore groups, mathematicians have devised various notions to break groups into smaller, better-understandable pieces, such as subgroups, quotient groups and simple groups. A rich theory has developed for finite groups, which culminated with the classification of finite simple groups. Since the mid-1980s, geometric group theory, which studies finitely generated groups as geometric objects, has become a particularly active area in group theory. One of the most familiar groups is the set of integers Z which consists of the numbers …, −4, −3, −2, −1, 0, 1, 2, 3, 4, …. The following properties of integer addition serve as a model for the group axioms given in the definition below. For any two integers a and b, the sum a + b is also an integer; that is, addition of integers always yields an integer. This property is known as closure under addition. For all integers a, b and c, (a + b) + c = a + (b + c). Expressed in words, adding a to b first, and then adding the result to c, gives the same final result as adding a to the sum of b and c; this property is known as associativity.
If a is any integer, then 0 + a = a + 0 = a; zero is called the identity element of addition because adding it to any integer returns the same integer. For every integer a, there is an integer b such that a + b = b + a = 0. The integer b is called the inverse element of the integer a and is denoted −a. The integers, together with the operation +, form a mathematical object belonging to a broad class sharing similar structural aspects. To appropriately understand these structures as a collective, the following abstract definition is developed.
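The four axioms can also be verified mechanically for a small concrete group. The sketch below (an illustration, not from the article; the choice n = 6 is arbitrary) brute-force checks closure, associativity, identity and invertibility for addition modulo n:

```python
from itertools import product

# Brute-force check of the four group axioms for {0, ..., n-1}
# under addition modulo n.
n = 6
elements = range(n)
op = lambda a, b: (a + b) % n

# Closure: the result of the operation stays in the set.
assert all(op(a, b) in elements for a, b in product(elements, repeat=2))

# Associativity: (a + b) + c == a + (b + c) for every triple.
assert all(op(op(a, b), c) == op(a, op(b, c))
           for a, b, c in product(elements, repeat=3))

# Identity: 0 + a == a + 0 == a.
assert all(op(0, a) == op(a, 0) == a for a in elements)

# Invertibility: every a has a b with a + b == b + a == 0.
assert all(any(op(a, b) == op(b, a) == 0 for b in elements) for a in elements)

print("Z/6Z satisfies all four group axioms")
```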
5.
Rotation
–
A rotation is a circular movement of an object around a center of rotation. A three-dimensional object always rotates around an imaginary line called a rotation axis. If the axis passes through the body's center of mass, the body is said to rotate upon itself, or spin. A rotation about an external point, e.g. the Earth about the Sun, is called a revolution or orbital revolution. The axis is called a pole. Mathematically, a rotation is a rigid body movement which, unlike a translation, keeps a point fixed. This definition applies to rotations within both two and three dimensions. All rigid body movements are rotations, translations, or combinations of the two. A rotation is simply a progressive radial orientation to a common point. That common point lies within the axis of that motion; the axis is perpendicular to the plane of the motion. If the axis of the rotation lies external to the body in question, then the body is said to orbit. There is no fundamental difference between a "rotation" and an "orbit" or "spin"; the key distinction is simply where the axis of the rotation lies, and this distinction can be demonstrated for both "rigid" and "non-rigid" bodies. If a rotation around a point or axis is followed by a second rotation around the same point/axis, a third rotation results. The reverse of a rotation is also a rotation; thus, the rotations around a point/axis form a group. However, a rotation around a point or axis and a rotation around a different point/axis may result in something other than a rotation, e.g. a translation. Rotations around the x, y and z axes are called principal rotations. Rotation around any axis can be performed by taking a rotation around the x axis, followed by a rotation around the y axis, followed by a rotation around the z axis; that is to say, any spatial rotation can be decomposed into a combination of principal rotations. In flight dynamics, the principal rotations are known as yaw, pitch and roll. This terminology is also used in computer graphics. In astronomy, rotation is a commonly observed phenomenon.
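The decomposition into principal rotations can be illustrated with a short Python sketch (plain 3×3 lists; the sample angles are arbitrary): composing rotations about the x, y and z axes yields a matrix that is still orthogonal with determinant +1, i.e. again a spatial rotation.

```python
import math

# Principal rotation matrices about the x, y and z axes (angle in radians).
def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Compose principal rotations (sample angles) into one spatial rotation.
R = mul(rot_z(0.3), mul(rot_y(1.2), rot_x(-0.7)))

# The composite is still a rotation: R^T R = I and det R = +1.
Rt = [[R[j][i] for j in range(3)] for i in range(3)]
I = mul(Rt, R)
det = (R[0][0] * (R[1][1] * R[2][2] - R[1][2] * R[2][1])
     - R[0][1] * (R[1][0] * R[2][2] - R[1][2] * R[2][0])
     + R[0][2] * (R[1][0] * R[2][1] - R[1][1] * R[2][0]))
print(round(det, 9))   # 1.0
```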
Stars, planets and similar bodies all spin around on their axes. The rotation rate of planets in the solar system was first measured by tracking visual features. Stellar rotation is measured through Doppler shift or by tracking active surface features. This rotation induces a centrifugal acceleration in the reference frame of the Earth which slightly counteracts the effect of gravity the closer one is to the equator.
6.
Origin (mathematics)
–
In mathematics, the origin of a Euclidean space is a special point, usually denoted by the letter O, used as a fixed point of reference for the geometry of the surrounding space. In physical problems, the choice of origin is often arbitrary, and this allows one to pick an origin point that makes the mathematics as simple as possible, often by taking advantage of some kind of geometric symmetry. In a Cartesian coordinate system, the origin is the point where the axes of the system intersect. The origin divides each of these axes into two halves, a positive and a negative semiaxis. The coordinates of the origin are all zero: (0, 0) in two dimensions and (0, 0, 0) in three. In a polar coordinate system, the origin may also be called the pole. In Euclidean geometry, the origin may be chosen freely as any convenient point of reference. The origin of the complex plane can be referred to as the point where the real axis and the imaginary axis intersect each other. In other words, it is the complex number zero.
7.
Three-dimensional space
–
Three-dimensional space is a geometric setting in which three values (called parameters) are required to determine the position of an element (i.e., a point). This is the informal meaning of the term dimension. In physics and mathematics, a sequence of n numbers can be understood as a location in n-dimensional space; when n = 3, the set of all such locations is called three-dimensional Euclidean space. It is commonly represented by the symbol ℝ3, and it serves as a three-parameter model of the physical universe in which all known matter exists. However, this space is only one example of a large variety of spaces in three dimensions called 3-manifolds. Furthermore, in this case, these three values can be labeled by any combination of three chosen from the terms width, height, depth, and breadth. In mathematics, analytic geometry describes every point in three-dimensional space by means of three coordinates. Three coordinate axes are given, each perpendicular to the other two at the origin, the point at which they cross. They are usually labeled x, y, and z; below are images of the above-mentioned systems. Two distinct points always determine a line. Three distinct points are either collinear or determine a unique plane. Four distinct points can either be collinear, coplanar, or determine the entire space. Two distinct lines can either intersect, be parallel or be skew. Two parallel lines, or two intersecting lines, lie in a unique plane, so skew lines are lines that do not meet and do not lie in a common plane. Two distinct planes can either meet in a common line or are parallel. Three distinct planes, no pair of which are parallel, can either meet in a common line, meet in a unique common point, or have no point in common; in the last case, the three lines of intersection of each pair of planes are mutually parallel. A line can lie in a given plane, intersect that plane in a unique point, or be parallel to the plane. In the last case, there will be lines in the plane that are parallel to the given line. A hyperplane is a subspace of one dimension less than the dimension of the full space. The hyperplanes of a three-dimensional space are the two-dimensional subspaces, that is, the planes.
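The trichotomy for two distinct lines (intersecting, parallel, or skew) can be decided computationally. The sketch below (an illustration, not from the article) represents each line by a point and a direction vector: parallel directions are detected with the cross product, and non-parallel lines meet exactly when they are coplanar, i.e. when the scalar triple product of the offset and the two directions vanishes.

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def classify(p1, d1, p2, d2, eps=1e-9):
    """Classify two lines in R^3, each given by a point p and direction d."""
    n = cross(d1, d2)
    if all(abs(c) < eps for c in n):   # parallel directions
        # Same line exactly when the offset between points is parallel to d1.
        if all(abs(c) < eps for c in cross(sub(p2, p1), d1)):
            return "coincident"
        return "parallel"
    # Non-parallel lines meet iff coplanar: (p2 - p1) . (d1 x d2) == 0.
    return "intersecting" if abs(dot(sub(p2, p1), n)) < eps else "skew"

print(classify((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))   # skew
print(classify((0, 0, 0), (1, 0, 0), (0, 0, 0), (0, 1, 0)))   # intersecting
print(classify((0, 0, 0), (1, 0, 0), (0, 1, 0), (2, 0, 0)))   # parallel
```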
8.
Euclidean space
–
In geometry, Euclidean space encompasses the two-dimensional Euclidean plane, the three-dimensional space of Euclidean geometry, and certain other spaces. It is named after the Ancient Greek mathematician Euclid of Alexandria. The term Euclidean distinguishes these spaces from other types of spaces considered in modern geometry. Euclidean spaces also generalize to higher dimensions. Classical Greek geometry defined the Euclidean plane and Euclidean three-dimensional space using certain postulates, while the other properties of these spaces were deduced as theorems. Geometric constructions were also used to define rational numbers. When algebra and mathematical analysis became developed enough, this relation reversed, and it is now more common to define Euclidean space using Cartesian coordinates and the ideas of analytic geometry. It means that points of the space are specified with collections of real numbers. This approach brings the tools of algebra and calculus to bear on questions of geometry and has the advantage that it generalizes easily to Euclidean spaces of more than three dimensions. From the modern viewpoint, there is essentially only one Euclidean space of each dimension. With Cartesian coordinates it is modelled by the real coordinate space of the same dimension. In one dimension, this is the real line; in two dimensions, it is the Cartesian plane; and in higher dimensions it is a coordinate space with three or more real number coordinates. One way to think of the Euclidean plane is as a set of points satisfying certain relationships, expressible in terms of distance and angle. For example, there are two fundamental operations on the plane. One is translation, which means a shifting of the plane so that every point is shifted in the same direction and by the same distance. The other is rotation about a fixed point in the plane. In order to make all of this mathematically precise, the theory must clearly define the notions of distance, angle, translation, and rotation. Even when used in physical theories, Euclidean space is an abstraction detached from actual physical locations, specific reference frames, and measurement instruments.
The standard way to define such a space, as carried out in the remainder of this article, is to define the Euclidean plane as a two-dimensional real vector space equipped with an inner product. The reason for working with vector spaces instead of Rn is that it is often preferable to work in a coordinate-free manner. Once the Euclidean plane has been described in this language, it is actually a simple matter to extend its concept to arbitrary dimensions. For the most part, the vocabulary, formulae, and calculations are not made any more difficult by the presence of more dimensions. Intuitively, the distinction says merely that there is no canonical choice of where the origin should go in the space.
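The inner-product viewpoint and the two fundamental motions can be sketched numerically. The Python fragment below (a minimal illustration, assuming R² with the standard dot product) defines distance via the inner product and checks that translation and rotation about a point both preserve it:

```python
import math

# The Euclidean plane as R^2 with the standard inner product.
def inner(u, v):
    return u[0] * v[0] + u[1] * v[1]

def dist(p, q):
    d = (p[0] - q[0], p[1] - q[1])
    return math.sqrt(inner(d, d))

def translate(p, t):
    return (p[0] + t[0], p[1] + t[1])

def rotate(p, theta, center=(0.0, 0.0)):
    # Rotation about a fixed point: shift the center to the origin,
    # rotate, then shift back.
    x, y = p[0] - center[0], p[1] - center[1]
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y + center[0], s * x + c * y + center[1])

p, q = (1.0, 2.0), (4.0, 6.0)
print(dist(p, q))   # 5.0

# Both fundamental motions preserve distance:
t = (2.5, -1.0)
assert math.isclose(dist(translate(p, t), translate(q, t)), dist(p, q))
assert math.isclose(dist(rotate(p, 0.8, (1, 1)), rotate(q, 0.8, (1, 1))), dist(p, q))
```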
9.
Function composition
–
In mathematics, function composition is the pointwise application of one function to the result of another to produce a third function. For instance, the functions f: X → Y and g: Y → Z can be composed to yield a function which maps each x in X to g(f(x)) in Z. The resulting composite function is denoted g ∘ f: X → Z. The notation g ∘ f is read as "g circle f", "g round f", "g composed with f", "g after f", "g following f", "g of f", or "g on f". Intuitively, composing two functions is a chaining process in which the output of the inner function becomes the input of the outer function. The composition of functions is a special case of the composition of relations, though the composition of functions has some additional properties. The composition of functions on a finite set can be given simply by listing, for each element x of the domain, the value g(f(x)). The composition of functions is always associative, a property inherited from the composition of relations. Since there is no distinction between the choices of placement of parentheses, they may be left off without causing any ambiguity. In a strict sense, the composition g ∘ f can be built only if f's codomain equals g's domain; in a wider sense it is sufficient that the former is a subset of the latter. The functions g and f are said to commute with each other if g ∘ f = f ∘ g. Commutativity is a special property, attained only by particular functions, and often in special circumstances. For example, |x| + 3 = |x + 3| only when x ≥ 0. The composition of one-to-one functions is always one-to-one. Similarly, the composition of two onto functions is always onto. It follows that the composition of two bijections is also a bijection. The inverse function of a composition has the property that (g ∘ f)−1 = f −1 ∘ g−1. Derivatives of compositions involving differentiable functions can be found using the chain rule. Higher derivatives of such functions are given by Faà di Bruno's formula. Suppose one has two functions f: X → X and g: X → X having the same domain and codomain. Then one can form chains of transformations composed together, such as f ∘ f ∘ g ∘ f. Such chains have the algebraic structure of a monoid, called a transformation monoid or composition monoid.
In general, transformation monoids can have remarkably complicated structure; one particularly notable example is the de Rham curve. The set of all functions f: X → X is called the full transformation semigroup or symmetric semigroup on X. If the transformations are bijective, then the set of all combinations of these functions forms a transformation group.
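The properties above (chaining, associativity, and the inverse of a composite) can be demonstrated with a small Python sketch; the particular functions used are arbitrary illustrative choices:

```python
# compose(g, f) returns the function x -> g(f(x)), read "g after f".
def compose(g, f):
    return lambda x: g(f(x))

f = lambda x: x + 3      # inner function
g = lambda x: 2 * x      # outer function
h = lambda x: x ** 2

gf = compose(g, f)
print(gf(4))             # g(f(4)) = 2 * (4 + 3) = 14

# Associativity: h o (g o f) == (h o g) o f, pointwise.
left = compose(h, compose(g, f))
right = compose(compose(h, g), f)
assert all(left(x) == right(x) for x in range(-10, 11))

# (g o f)^-1 = f^-1 o g^-1 for the invertible f and g above.
f_inv = lambda y: y - 3
g_inv = lambda y: y / 2
inv = compose(f_inv, g_inv)
assert all(inv(gf(x)) == x for x in range(-10, 11))
```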
10.
Isometry
–
In mathematics, an isometry is a distance-preserving transformation between metric spaces, usually assumed to be bijective. Isometries are often used in constructions where one space is embedded in another space. For instance, the completion of a metric space M involves an isometry from M into M′, a quotient set of the space of Cauchy sequences on M. The original space M is thus isometrically isomorphic to a subspace of a complete metric space. An isometric surjective linear operator on a Hilbert space is called a unitary operator. Let X and Y be metric spaces with metrics dX and dY. A map f: X → Y is called an isometry or distance preserving if for any a, b ∈ X one has dY(f(a), f(b)) = dX(a, b). An isometry is automatically injective; otherwise two distinct points, a and b, could be mapped to the same point, thereby contradicting the coincidence axiom of the metric d. This proof is similar to the proof that an order embedding between partially ordered sets is injective. Clearly, every isometry between metric spaces is a topological embedding. A global isometry, isometric isomorphism or congruence mapping is a bijective isometry. Like any other bijection, a global isometry has a function inverse, and the inverse of a global isometry is also a global isometry. Two metric spaces X and Y are called isometric if there is a bijective isometry from X to Y. The set of bijective isometries from a metric space to itself forms a group with respect to function composition, called the isometry group. The term global isometry is often abridged to simply isometry, so one should take care to determine from context which type is intended. Any reflection, translation and rotation is a global isometry on Euclidean spaces. The map x ↦ |x| in R is a path isometry but not an isometry; note that unlike an isometry, it is not injective. The isometric linear maps from Cn to itself are given by the unitary matrices. Given two normed vector spaces V and W, a linear isometry is a linear map f: V → W that preserves the norms.
Linear isometries are distance-preserving maps in the above sense, and they are global isometries if and only if they are surjective. By the Mazur–Ulam theorem, any isometry of normed vector spaces over R is affine. Note that ε-isometries are not assumed to be continuous. The restricted isometry property characterizes nearly isometric matrices for sparse vectors.
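The defining property dY(f(a), f(b)) = dX(a, b) can be checked numerically. The sketch below (an illustration with arbitrary sample parameters) verifies it for a rigid motion of the plane, a rotation followed by a translation, which the text notes is a global isometry on Euclidean spaces:

```python
import math
import random

def d(p, q):
    # Standard Euclidean distance in the plane.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def rigid_motion(p, theta=0.9, t=(4.0, -2.0)):
    # Rotation by theta about the origin, followed by translation by t.
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + t[0], s * p[0] + c * p[1] + t[1])

random.seed(0)
pts = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(20)]
for a in pts:
    for b in pts:
        assert math.isclose(d(rigid_motion(a), rigid_motion(b)), d(a, b),
                            abs_tol=1e-9)
print("rotation + translation preserves all pairwise distances")

# By contrast, x -> |x| on R is not an isometry: it is not even injective.
assert abs(-3) == abs(3)   # two distinct points map to the same point
```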
11.
Orientation (vector space)
–
In linear algebra, the notion of orientation makes sense in arbitrary finite dimension. In this setting, the orientation of an ordered basis is a kind of asymmetry that makes a reflection impossible to replicate by means of a simple rotation. As a result, in the three-dimensional Euclidean space, the two possible basis orientations are called right-handed and left-handed. An orientation on a real vector space is the arbitrary choice of which ordered bases are positively oriented and which are negatively oriented. In the three-dimensional Euclidean space, right-handed bases are typically declared to be positively oriented. A vector space with an orientation selected is called an oriented vector space, while one not having an orientation selected is called unoriented. Let V be a finite-dimensional real vector space and let b1 and b2 be two ordered bases for V. It is a standard result in linear algebra that there exists a unique linear transformation A: V → V that takes b1 to b2. The bases b1 and b2 are said to have the same orientation if A has positive determinant. The property of having the same orientation defines an equivalence relation on the set of all ordered bases for V. If V is non-zero, there are precisely two equivalence classes determined by this relation. An orientation on V is an assignment of +1 to one equivalence class and −1 to the other. Every ordered basis lives in one equivalence class or another; thus any choice of an ordered basis for V determines an orientation. For example, the standard basis on Rn provides a standard orientation on Rn. Any choice of an isomorphism between V and Rn will then provide an orientation on V. The ordering of elements in a basis is crucial: two bases with a different ordering will differ by some permutation. They will have the same or opposite orientations according to whether the signature of this permutation is ±1; this is because the determinant of a permutation matrix is equal to the signature of the associated permutation. Similarly, let A be a linear mapping of vector space Rn to Rn.
This mapping is orientation-preserving if its determinant is positive. A zero-dimensional vector space has only a single point, the zero vector. Consequently, the only basis of a zero-dimensional vector space is the empty set ∅. Therefore, there is a single equivalence class of ordered bases, namely the class whose sole member is the empty set.
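The determinant test for orientation can be sketched in a few lines of Python (an illustration for R², not from the article): two ordered bases have the same orientation exactly when the determinants of the matrices they form have the same sign, which is equivalent to the change-of-basis matrix having positive determinant.

```python
def det2(u, v):
    # Determinant of the 2x2 matrix whose columns are u and v.
    return u[0] * v[1] - u[1] * v[0]

def same_orientation(basis1, basis2):
    # Compare the signs of the two basis determinants.
    return det2(*basis1) * det2(*basis2) > 0

standard = ((1, 0), (0, 1))    # standard (right-handed) basis
rotated  = ((0, 1), (-1, 0))   # standard basis rotated by 90 degrees
swapped  = ((0, 1), (1, 0))    # the same vectors, opposite order

print(same_orientation(standard, rotated))   # True: rotation preserves orientation
print(same_orientation(standard, swapped))   # False: swapping two vectors flips it
```

Swapping the two basis vectors is a permutation of signature −1, which is exactly why the orientation flips.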
12.
Inverse function
–
In mathematics, an inverse function is a function that "reverses" another function: if f applied to an input x gives a result of y, then applying the inverse function g to y gives back x, i.e. f(x) = y if and only if g(y) = x. As a simple example, consider the function of a real variable given by f(x) = 5x − 7. Thinking of this as a step-by-step procedure, to reverse it and get x back from some output value, say y, we undo each step in reverse order. In this case, that means that we should add 7 to y and then divide the result by 5. In functional notation this inverse function would be given by g(y) = (y + 7)/5. With y = 5x − 7 we have that f(x) = y and g(y) = x. Not all functions have inverse functions. In order for a function f: X → Y to have an inverse, it must have the property that for every y in Y there is one, and only one, x in X so that f(x) = y. This property ensures that a function g: Y → X will exist having the necessary relationship with f. Let f be a function whose domain is the set X, and whose image is the set Y. Then f is invertible if there exists a function g with domain Y and image X, with the property f(x) = y if and only if g(y) = x. If f is invertible, the function g is unique, which means that there is exactly one function g satisfying this property. That function g is called the inverse of f, and is usually denoted as f −1. Stated otherwise, a function is invertible if and only if its inverse relation is a function on the range Y. Not all functions have an inverse. For a function to have an inverse, each element y ∈ Y must correspond to no more than one x ∈ X; a function f with this property is called one-to-one or an injection. If f −1 is to be a function on Y, then each element y ∈ Y must correspond to some x ∈ X. Functions with this property are called surjections. This property is satisfied by definition if Y is the image of f. To be invertible, a function must be both an injection and a surjection. If a function f is invertible, then both it and its inverse function f −1 are bijections. There is another convention used in the definition of functions.
This can be referred to as the set-theoretic or graph definition using ordered pairs, in which a codomain is never referred to. Under this convention, all functions are surjections, and so being a bijection simply means being an injection. Authors using this convention may use the phrasing that a function is invertible if and only if it is an injection. The two conventions need not cause confusion as long as it is remembered that in this alternate convention the codomain of a function is always taken to be the range of the function. With a function that is not injective, it is impossible to deduce a unique input from its output; such a function is called non-injective or, in some applications, information-losing.
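The worked example from the text, f(x) = 5x − 7 with inverse g(y) = (y + 7)/5, can be checked directly in Python:

```python
# f multiplies by 5 and subtracts 7; g undoes the steps in reverse order.
def f(x):
    return 5 * x - 7

def g(y):
    return (y + 7) / 5

print(f(3))   # 8
print(g(8))   # 3.0: g recovers the original input

# Round-trip property: g(f(x)) == x on a range of integer inputs.
assert all(g(f(x)) == x for x in range(-100, 101))
```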
13.
Identity function
–
In mathematics, an identity function, also called an identity relation or identity map or identity transformation, is a function that always returns the same value that was used as its argument. In equations, the function is given by f(x) = x. Formally, if M is a set, the identity function f on M is defined to be that function with domain and codomain M which satisfies f(x) = x for all elements x in M. In other words, the function value f(x) in M is always the same as the input element x of M. The identity function on M is clearly an injective function as well as a surjective function, so it is also bijective. The identity function f on M is often denoted by idM. In set theory, where a function is defined as a particular kind of binary relation, the identity function is given by the identity relation, or diagonal, of M. If f: M → N is any function, then we have f ∘ idM = f = idN ∘ f. In particular, idM is the identity element of the monoid of all functions from M to M. Since the identity element of a monoid is unique, one can alternatively define the identity function on M to be this identity element. Such a definition generalizes to the concept of an identity morphism in category theory. The identity function is a linear operator when applied to vector spaces. The identity function on the positive integers is a completely multiplicative function. In an n-dimensional vector space the identity function is represented by the identity matrix In. In a metric space the identity is trivially an isometry. An object without any symmetry has as its symmetry group the trivial group containing only this isometry. In a topological space, the identity function is always continuous.
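The monoid identity f ∘ idM = f = idN ∘ f can be demonstrated in a couple of lines (an illustrative sketch; the sample function f is arbitrary):

```python
# The identity function, and composition g after f.
identity = lambda x: x

def compose(g, f):
    return lambda x: g(f(x))

f = lambda x: 3 * x + 1

left  = compose(f, identity)   # f o id
right = compose(identity, f)   # id o f
assert all(left(x) == f(x) == right(x) for x in range(-50, 50))
print("identity is the neutral element for composition")
```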
14.
Associative property
–
In mathematics, the associative property is a property of some binary operations. In propositional logic, associativity is a valid rule of replacement for expressions in logical proofs. Within an expression containing two or more occurrences in a row of the same associative operator, the order in which the operations are performed does not matter as long as the sequence of the operands is not changed; that is, rearranging the parentheses in such an expression will not change its value. Consider the following equations: (2 + 3) + 4 = 2 + (3 + 4) = 9 and 2 × (3 × 4) = (2 × 3) × 4 = 24. Even though the parentheses were rearranged on each line, the values of the expressions were not altered. Since this holds true when performing addition and multiplication on any real numbers, it can be said that addition and multiplication of real numbers are associative operations. Associativity is not to be confused with commutativity, which addresses whether or not the order of two operands changes the result. For example, the order doesn't matter in the multiplication of real numbers; that is, a × b = b × a. Associative operations are abundant in mathematics; in fact, many algebraic structures explicitly require their binary operations to be associative. However, many important and interesting operations are non-associative; some examples include subtraction, exponentiation and the vector cross product. Formally, a binary operation ∗ on a set S is called associative if (x ∗ y) ∗ z = x ∗ (y ∗ z) = xyz for all x, y, z in S, where juxtaposition denotes the unambiguous repeated operation. The associative law can also be expressed in functional notation thus: f(f(x, y), z) = f(x, f(y, z)). If a binary operation is associative, repeated application of the operation produces the same result regardless of how valid pairs of parentheses are inserted in the expression. This is called the generalized associative law; thus a product of four elements can be written unambiguously as abcd. As the number of elements increases, the number of possible ways to insert parentheses grows quickly. Some examples of associative operations include the following. String concatenation is associative: joining the first two strings and then appending the third gives the same result as joining the last two and then prepending the first. In arithmetic, addition and multiplication of real numbers are associative; i.e., (x + y) + z = x + (y + z) = x + y + z and (xy)z = x(yz) = xyz for all x, y, z ∈ R. Because of associativity, the grouping parentheses can be omitted without ambiguity. Addition and multiplication of complex numbers and quaternions are associative.
Addition of octonions is also associative, but multiplication of octonions is non-associative. The greatest common divisor and least common multiple functions act associatively: gcd(gcd(x, y), z) = gcd(x, gcd(y, z)) = gcd(x, y, z) and lcm(lcm(x, y), z) = lcm(x, lcm(y, z)) = lcm(x, y, z) for all x, y, z ∈ Z. Taking the intersection or the union of sets is associative: (A ∩ B) ∩ C = A ∩ (B ∩ C) = A ∩ B ∩ C and (A ∪ B) ∪ C = A ∪ (B ∪ C) = A ∪ B ∪ C for all sets A, B, C. Slightly more generally, given four sets M, N, P and Q, with maps h: M → N, g: N → P and f: P → Q, then f ∘ (g ∘ h) = (f ∘ g) ∘ h; in short, composition of maps is always associative. Consider a set with three elements, A, B, and C, with an operation defined by a suitable table; associativity then guarantees, for example, that A(BC) = (AB)C.
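For an operation on a small finite set, associativity can be verified exhaustively. The sketch below (an illustration, not from the article) checks every triple; note how subtraction fails the test, matching the non-associative examples mentioned above:

```python
from itertools import product

def is_associative(elements, op):
    # (x * y) * z must equal x * (y * z) for every triple.
    return all(op(op(x, y), z) == op(x, op(y, z))
               for x, y, z in product(elements, repeat=3))

S = range(8)
print(is_associative(S, lambda a, b: (a + b) % 8))   # True: addition mod 8
print(is_associative(S, lambda a, b: max(a, b)))     # True: max is associative
print(is_associative(S, lambda a, b: a - b))         # False: subtraction is not
```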
15.
Manifold
–
In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point. More precisely, each point of an n-dimensional manifold has a neighbourhood that is homeomorphic to the Euclidean space of dimension n. One-dimensional manifolds include lines and circles, but not figure eights; two-dimensional manifolds are also called surfaces. Although a manifold locally resembles Euclidean space, globally it may not: for example, the surface of the sphere is not a Euclidean space, but in a region it can be charted by means of map projections of the region into the Euclidean plane. When a region appears in two neighbouring charts, the two representations do not coincide exactly and a transformation is needed to pass from one to the other. Manifolds naturally arise as solution sets of systems of equations and as graphs of functions. One important class of manifolds is the class of differentiable manifolds; this differentiable structure allows calculus to be done on manifolds. A Riemannian metric on a manifold allows distances and angles to be measured; symplectic manifolds serve as the phase spaces in the Hamiltonian formalism of classical mechanics, while four-dimensional Lorentzian manifolds model spacetime in general relativity. After a line, the circle is the simplest example of a topological manifold. Topology ignores bending, so a small piece of a circle is treated exactly the same as a small piece of a line. Consider, for instance, the top part of the unit circle, x² + y² = 1, where y > 0. Any point of this arc can be described by its x-coordinate. So, projection onto the first coordinate is a continuous, and invertible, mapping from the arc to the open interval (−1, 1). Such functions along with the regions they map are called charts. Similarly, there are charts for the bottom, left, and right parts of the circle; together, these parts cover the whole circle and the four charts form an atlas for the circle.
The top and right charts, χtop and χright respectively, overlap in their domains: their intersection lies in the quarter of the circle where both coordinates are positive, and each maps this part into the interval (0, 1), though differently. Let a be any number in (0, 1); then T(a) = χright(χtop⁻¹(a)) = χright(a, √(1 − a²)) = √(1 − a²). Such a function is called a transition map. The top, bottom, left, and right charts show that the circle is a manifold; charts need not be geometric projections, and the number of charts is a matter of some choice. Charts based on slope provide a second atlas for the circle, with transition map t = 1/s. Each chart of this atlas omits a single point of the circle, and it can be proved that it is not possible to cover the full circle with a single chart. Viewed using calculus, the transition function T is simply a function between open intervals, which gives a meaning to the statement that T is differentiable.
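The chart construction above can be checked numerically. This minimal Python sketch (the names chart_top, chart_right, chart_top_inverse and transition are illustrative) implements the two projections for the unit circle and the transition map between them:

```python
import math

def chart_top(x, y):
    # Defined on the arc with y > 0; projects a circle point to its x-coordinate.
    return x

def chart_right(x, y):
    # Defined on the arc with x > 0; projects a circle point to its y-coordinate.
    return y

def chart_top_inverse(a):
    # Recover the point of the upper arc from its x-coordinate a in (-1, 1).
    return (a, math.sqrt(1.0 - a * a))

def transition(a):
    # T = chart_right ∘ chart_top⁻¹ on the overlap 0 < a < 1; equals sqrt(1 - a²).
    return chart_right(*chart_top_inverse(a))

t = transition(0.6)          # sqrt(1 - 0.36) = 0.8
p = chart_top_inverse(0.28)  # a point on the unit circle
```
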
16.
Smoothness
–
In mathematical analysis, the smoothness of a function is a property measured by the number of derivatives it has which are continuous. A smooth function is a function that has derivatives of all orders everywhere in its domain; a differentiability class is a classification of functions according to the properties of their derivatives, and higher order differentiability classes correspond to the existence of more derivatives. Consider an open set on the real line and a function f defined on that set with real values. Let k be a non-negative integer. The function f is said to be of class Ck if the derivatives f′, f′′, …, f⁽ᵏ⁾ exist and are continuous. The function f is said to be of class C∞, or smooth, if it has derivatives of all orders. The function f is said to be of class Cω, or analytic, if f is smooth and equals its Taylor series expansion around any point in its domain; Cω is thus strictly contained in C∞. Bump functions are examples of functions in C∞ but not in Cω. To put it differently, the class C0 consists of all continuous functions, and the class C1 consists of all differentiable functions whose derivative is continuous; thus, a C1 function is exactly a function whose derivative exists and is of class C0. In particular, Ck is contained in Ck−1 for every k ≥ 1, and C∞, the class of infinitely differentiable functions, is the intersection of the sets Ck as k varies over the non-negative integers. The function f(x) = x for x ≥ 0 and f(x) = 0 for x < 0 is continuous but not differentiable at x = 0, so it is of class C0 but not of class C1. The function g(x) = x² sin(1/x) for x ≠ 0, with g(0) = 0, is differentiable everywhere, but because cos(1/x) oscillates as x → 0, g′ is not continuous at zero; therefore, this function is differentiable but not of class C1. The functions f(x) = |x|^(k+1), where k is even, are continuous and k times differentiable at all x, but at x = 0 they are not (k + 1) times differentiable, so they are of class Ck but not of class Ck+1. The exponential function is analytic, so of class Cω, and the trigonometric functions are also analytic wherever they are defined. The bump function is an example of a smooth function with compact support. Let n and m be some positive integers; if f is a function from an open subset of Rn with values in Rm, then f has component functions f1, …, fm.
Each of these may or may not have partial derivatives, and the classes C∞ and Cω are defined as before. These criteria of differentiability can be applied to the transition functions of a differential structure; the resulting space is called a Ck manifold. If one wishes to start with a coordinate-independent definition of the class Ck, one may start by considering maps between Banach spaces. A map from one Banach space to another is differentiable at a point if there is a bounded linear map which approximates it near that point.
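The standard example of a function that is differentiable but not C1, g(x) = x² sin(1/x) with g(0) = 0, can be probed numerically. In this sketch the difference quotients at 0 shrink to 0, so g′(0) exists, yet g′ takes values near −1 and +1 arbitrarily close to 0, so g′ is not continuous there (variable names are illustrative):

```python
import math

def g(x):
    # g(x) = x² sin(1/x) for x ≠ 0, g(0) = 0: differentiable everywhere
    return x * x * math.sin(1.0 / x) if x != 0 else 0.0

def gprime(x):
    # For x ≠ 0: g'(x) = 2x sin(1/x) - cos(1/x); at 0, g'(0) = 0 (limit of g(h)/h).
    return 2 * x * math.sin(1.0 / x) - math.cos(1.0 / x) if x != 0 else 0.0

# Difference quotients g(h)/h = h sin(1/h) are bounded by |h|, so g'(0) = 0 ...
quotients = [g(h) / h for h in (1e-3, 1e-6, 1e-9)]
# ... yet g' oscillates between values near -1 and +1 arbitrarily close to 0.
near_minus_one = gprime(1.0 / (2 * math.pi * 1000))        # cos(1/x) ≈ +1 here
near_plus_one = gprime(1.0 / ((2 * 1000 + 1) * math.pi))   # cos(1/x) ≈ -1 here
```
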
17.
Lie group
–
In mathematics, a Lie group /ˈliː/ is a group that is also a differentiable manifold, with the property that the group operations are compatible with the smooth structure. Lie groups are named after Sophus Lie, who laid the foundations of the theory of continuous transformation groups. The term groupes de Lie first appeared in French in 1893 in the thesis of Lie's student Arthur Tresse; an extension of Galois theory to the case of continuous symmetry groups was one of Lie's principal motivations. Lie groups are smooth manifolds and as such can be studied using differential calculus. Lie groups play an enormous role in modern geometry, on several different levels. Felix Klein argued in his Erlangen program that one can consider various geometries by specifying an appropriate transformation group that leaves certain geometric properties invariant; this idea later led to the notion of a G-structure, where G is a Lie group of local symmetries of a manifold. On a global level, whenever a Lie group acts on a geometric object, such as a Riemannian or a symplectic manifold, the presence of continuous symmetries expressed via the Lie group action places strong constraints on its geometry. Linear actions of Lie groups are especially important, and are studied in representation theory. This insight opened new possibilities in pure algebra, by providing a uniform construction for most finite simple groups. A real Lie group is a group that is also a finite-dimensional real smooth manifold, in which the group operations of multiplication and inversion are smooth maps. Smoothness of the group multiplication μ : G × G → G, μ(x, y) = xy, means that μ is a smooth mapping of the product manifold G × G into G. These two requirements can be combined into the single requirement that the mapping (x, y) ↦ x⁻¹y be a smooth mapping of the product manifold into G. The 2×2 real invertible matrices form a group under multiplication, denoted by GL(2, ℝ) or by GL₂(ℝ); this is a four-dimensional noncompact real Lie group.
This group is disconnected: it has two connected components corresponding to the positive and negative values of the determinant. The rotation matrices form a subgroup of GL(2, ℝ), denoted by SO(2, ℝ). It is a Lie group in its own right; specifically, using the rotation angle φ as a parameter, this group can be parametrized as follows: SO(2, ℝ) = { (cos φ, −sin φ; sin φ, cos φ) : φ ∈ ℝ mod 2π }. Addition of the angles corresponds to multiplication of the elements of SO(2, ℝ), and taking the opposite angle corresponds to inversion; thus both multiplication and inversion are differentiable maps. The orthogonal group also forms an interesting example of a Lie group. All of the preceding examples of Lie groups fall within the class of classical groups. Hilbert's fifth problem asked whether replacing differentiable manifolds with topological or analytic ones can yield new examples; if the underlying manifold is allowed to be infinite-dimensional, then one arrives at the notion of an infinite-dimensional Lie group.
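The statement that multiplication in SO(2) corresponds to addition of angles can be verified directly. A minimal Python sketch (the helper names rotation and matmul2 are illustrative):

```python
import math

def rotation(phi):
    # The SO(2) element parametrized by the angle phi
    c, s = math.cos(phi), math.sin(phi)
    return [[c, -s], [s, c]]

def matmul2(A, B):
    # Product of two 2x2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Group law: composing rotations adds their angles ...
prod = matmul2(rotation(0.3), rotation(0.5))
same = rotation(0.8)
# ... and inversion is rotation by the opposite angle.
ident = matmul2(rotation(0.3), rotation(-0.3))
```
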
18.
Compact space
–
In mathematics, and more specifically in general topology, compactness is a property that generalizes the notion of a subset of Euclidean space being closed and bounded. Examples include a closed interval, a rectangle, or a finite set of points. This notion is defined for more general topological spaces than Euclidean space in various ways. One such generalization is that a space is compact if any infinite sequence of points sampled from the space must frequently get arbitrarily close to some point of the space. An equivalent definition is that every sequence of points must have an infinite subsequence that converges to some point of the space; the Heine–Borel theorem states that a subset of Euclidean space is compact in this sequential sense if and only if it is closed and bounded. Thus, if one chooses an infinite number of points in the closed unit interval, some of those points must get arbitrarily close to some real number in that space. For instance, some of the numbers in the sequence 1/2, 4/5, 1/3, 5/6, 1/4, 6/7, … accumulate to 0, while others accumulate to 1. The same set of points would not accumulate to any point of the open unit interval, so the open unit interval is not compact. Euclidean space itself is not compact since it is not bounded; in particular, the sequence of points 0, 1, 2, 3, … has no subsequence that converges to any given real number. Apart from closed and bounded subsets of Euclidean space, typical examples of compact spaces include spaces consisting not of geometrical points but of functions. The term compact was introduced into mathematics by Maurice Fréchet in 1904 as a distillation of this concept. Various equivalent notions of compactness, including sequential compactness and limit point compactness, can be developed in general metric spaces; in general topological spaces, however, different notions of compactness are not necessarily equivalent. This more subtle notion, introduced by Pavel Alexandrov and Pavel Urysohn in 1929, exhibits compact spaces as generalizations of finite sets. The term compact set is sometimes a synonym for compact space, but usually refers to a compact subspace of a topological space.
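The sequence 1/2, 4/5, 1/3, 5/6, 1/4, 6/7, … mentioned above interleaves two subsequences, one marching toward 0 and one toward 1; both accumulation points lie inside the closed interval, as this short Python sketch illustrates (variable names are illustrative):

```python
# Odd-positioned terms 1/2, 1/3, 1/4, ... tend to 0;
# even-positioned terms 4/5, 5/6, 6/7, ... tend to 1.
toward_zero = [1.0 / (k + 1) for k in range(1, 1001)]
toward_one = [(k + 3) / (k + 4) for k in range(1, 1001)]

# Interleave them as in the text: the full sequence never converges,
# but it has subsequences converging to 0 and to 1, both points of [0, 1].
sequence = [x for pair in zip(toward_zero, toward_one) for x in pair]
```
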
In the 19th century, several disparate mathematical properties were understood that would later be seen as consequences of compactness. On the one hand, Bernard Bolzano had been aware that any bounded sequence of points has a subsequence that must eventually get arbitrarily close to some other point, called a limit point. Bolzano's proof relied on the method of bisection: the sequence was placed into an interval that was then divided into two equal parts, and a part containing infinitely many terms of the sequence was selected; the process could then be repeated by dividing the resulting smaller interval into smaller and smaller parts until it closes down on the limit point. The full significance of Bolzano's theorem, and its method of proof, would not emerge until much later. In the 1880s, it became clear that results similar to the Bolzano–Weierstrass theorem could be formulated for spaces of functions rather than just numbers or geometrical points. The idea of regarding functions as points of a generalized space dates back to the investigations of Giulio Ascoli. The uniform limit of such a sequence then played precisely the same role as Bolzano's limit point, and this ultimately led to the notion of a compact operator as an offshoot of the general notion of a compact space. It was Maurice Fréchet who, in 1906, had distilled the essence of the Bolzano–Weierstrass property. Earlier, in 1870, Eduard Heine showed that a continuous function defined on a closed and bounded interval was in fact uniformly continuous.
19.
Linear map
–
In mathematics, a linear map is a mapping V → W between two modules (in particular, two vector spaces) that preserves the operations of addition and scalar multiplication. An important special case is when V = W, in which case the map is called a linear operator, or an endomorphism of V. Sometimes the term linear function has the same meaning as linear map. A linear map always maps linear subspaces onto linear subspaces; for instance, it maps a plane through the origin to a plane through the origin, a line through the origin, or the origin itself. Linear maps can often be represented as matrices, and simple examples include rotation and reflection linear transformations. In the language of abstract algebra, a linear map is a module homomorphism; in the language of category theory, it is a morphism in the category of modules over a given ring. Let V and W be vector spaces over the same field K. A function f : V → W is a linear map if for any vectors x₁, …, xₘ ∈ V and scalars a₁, …, aₘ ∈ K, the equality f(a₁x₁ + ⋯ + aₘxₘ) = a₁f(x₁) + ⋯ + aₘf(xₘ) holds. When V and W can be regarded as vector spaces over more than one field, it is then necessary to specify which of these fields is being used in the definition of linear. If V and W are considered as spaces over the field K as above, we speak of K-linear maps; for example, the conjugation of complex numbers is an R-linear map C → C, but it is not C-linear. A linear map from V to K is called a linear functional. These statements generalize to any left-module RM over a ring R without modification, and to any right-module upon reversing of the scalar multiplication. The zero map between two left-modules over the same ring is always linear. The identity map on any module is a linear operator. Any homothecy centered at the origin of a vector space, v ↦ cv where c is a scalar, is a linear operator; this does not hold in general for modules, where such a map might only be semilinear. For real numbers, the map x ↦ x² is not linear. A matrix A defines a linear map x ↦ Ax; conversely, any linear map between finite-dimensional vector spaces can be represented in this manner, see the following section. Differentiation defines a linear map from the space of all differentiable functions to the space of all functions; it also defines a linear operator on the space of all smooth functions.
If V and W are finite-dimensional vector spaces over a field F, then the functions that send linear maps f : V → W to dimF(W) × dimF(V) matrices in the way described in the sequel are themselves linear maps. The expected value of a random variable is linear: for random variables X and Y we have E[X + Y] = E[X] + E[Y] and E[aX] = aE[X].
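The defining identity f(ax + by) = af(x) + bf(y) can be checked numerically for a map given by a matrix. A minimal Python sketch (the helper name apply_matrix is illustrative):

```python
def apply_matrix(A, v):
    # The linear map v ↦ Av defined by the matrix A
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

A = [[2, 1], [0, 3]]
x, y = [1, 2], [4, -1]
a, b = 3, -2

# f(a x + b y) ...
combo = [a * xi + b * yi for xi, yi in zip(x, y)]
lhs = apply_matrix(A, combo)
# ... equals a f(x) + b f(y): the map preserves linear combinations.
rhs = [a * u + b * v for u, v in zip(apply_matrix(A, x), apply_matrix(A, y))]
```

By contrast, the map x ↦ x² mentioned above fails this identity: (1 + 2)² = 9 while 1² + 2² = 5.
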
20.
Matrix (mathematics)
–
In mathematics, a matrix is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns; for example, a matrix with two rows and three columns has dimensions 2 × 3. The individual items in an m × n matrix A, often denoted by ai,j, where 1 ≤ i ≤ m and 1 ≤ j ≤ n, are called its elements or entries. Provided that they have the same size, two matrices can be added or subtracted element by element. The rule for matrix multiplication, however, is that two matrices can be multiplied only when the number of columns in the first equals the number of rows in the second. Any matrix can be multiplied element-wise by a scalar from its associated field. A major application of matrices is to represent linear transformations, that is, generalizations of linear functions such as f(x) = 4x. The product of two matrices is a matrix that represents the composition of the two linear transformations. Another application of matrices is in the solution of systems of linear equations. If the matrix is square, it is possible to deduce some of its properties by computing its determinant; for example, a square matrix has an inverse if and only if its determinant is not zero. Insight into the geometry of a linear transformation is obtainable from the matrix's eigenvalues and eigenvectors. Applications of matrices are found in most scientific fields: in computer graphics, they are used to manipulate 3D models and project them onto a 2-dimensional screen. Matrix calculus generalizes classical analytical notions such as derivatives and exponentials to higher dimensions, and matrices are used in economics to describe systems of economic relationships. A major branch of numerical analysis is devoted to the development of efficient algorithms for matrix computations. Matrix decomposition methods simplify computations, both theoretically and practically; algorithms that are tailored to particular matrix structures, such as sparse matrices and near-diagonal matrices, expedite computations in the finite element method and other computations.
Infinite matrices occur in planetary theory and in atomic theory; a simple example of an infinite matrix is the matrix representing the derivative operator, which acts on the Taylor series of a function. A matrix is a rectangular array of numbers or other mathematical objects for which operations such as addition and multiplication are defined. Most commonly, a matrix over a field F is a rectangular array of scalars, each of which is a member of F. Most of this article focuses on real and complex matrices, that is, matrices whose elements are real numbers or complex numbers; more general types of entries are discussed below. For instance, A = (1 2 3; 4 5 6) is a real matrix with two rows and three columns.
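The element-by-element addition and scalar multiplication described above can be sketched in a few lines of Python (the helper names mat_add and scalar_mul are illustrative):

```python
def mat_add(A, B):
    # Entrywise addition: only defined when the two matrices have the same size.
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "sizes must match"
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mul(c, A):
    # Multiply every entry by the scalar c; always defined.
    return [[c * a for a in row] for row in A]

A = [[1, 2, 3], [4, 5, 6]]   # a real 2 x 3 matrix
B = [[0, 1, 0], [1, 0, 1]]   # another 2 x 3 matrix

S = mat_add(A, B)
D = scalar_mul(2, A)
```
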
21.
Basis (linear algebra)
–
In linear algebra, a basis of a vector space is a linearly independent subset that spans the space; in more general terms, a basis is a linearly independent spanning set. Given a basis of a vector space V, every element of V can be expressed uniquely as a linear combination of basis vectors. A vector space can have several distinct sets of basis vectors; however, each such set has the same number of elements. A basis B of a vector space V over a field F is a linearly independent subset of V that spans V. In more detail, suppose that B = {v₁, …, vₙ} is a subset of a vector space V over a field F; B is a basis if every x in V can be written uniquely as x = a₁v₁ + ⋯ + aₙvₙ with a₁, …, aₙ in F. The numbers aᵢ are called the coordinates of the vector x with respect to the basis B. A vector space that has a finite basis is called finite-dimensional. To deal with infinite-dimensional spaces, we must generalize the definition to include infinite basis sets. The sums in the definition are all finite because, without additional structure, the axioms of a vector space do not permit us to meaningfully speak about an infinite sum of vectors; settings that permit infinite linear combinations allow alternative definitions of the basis concept. It is often convenient to list the basis vectors in a specific order, for example, when considering the transformation matrix of a linear map with respect to a basis. We then speak of an ordered basis, which we define to be a sequence of linearly independent vectors that span V. To summarize: B is a set of linearly independent vectors, i.e. it is a linearly independent set, and every vector in V can be expressed as a linear combination of vectors in B in a unique way. If the basis is ordered, then the coefficients in this linear combination provide coordinates of the vector relative to the basis. Every vector space has a basis; the proof of this requires the axiom of choice. All bases of a vector space have the same cardinality, called the dimension of the vector space; this result is known as the dimension theorem, and requires the ultrafilter lemma, a strictly weaker form of the axiom of choice.
Also, many vector spaces can be attributed a standard basis, a distinguished set of vectors that is both spanning and linearly independent. Standard bases include, for example: in Rn, {e₁, …, eₙ}, where eᵢ is the ith column of the identity matrix; in P2, where P2 is the set of all polynomials of degree at most 2, {1, x, x²} is the standard basis; and in M22, where M22 is the set of all 2×2 matrices, the standard basis consists of the four matrices Em,n, where Em,n is the 2×2 matrix with a 1 in the (m, n) position and zeros elsewhere. Given a vector space V over a field F, suppose that B₁ and B₂ are two bases for V.
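The coordinates of a vector with respect to a basis of R² can be computed by solving a small linear system; the sketch below does this via Cramer's rule (a technique chosen here for brevity; the helper name coordinates is illustrative). Uniqueness of the coordinates is exactly what makes the pair a basis:

```python
def coordinates(v1, v2, x):
    # Solve x = a1*v1 + a2*v2 for (a1, a2) by Cramer's rule in R².
    det = v1[0] * v2[1] - v1[1] * v2[0]
    assert det != 0, "v1, v2 are linearly dependent: not a basis"
    a1 = (x[0] * v2[1] - x[1] * v2[0]) / det
    a2 = (v1[0] * x[1] - v1[1] * x[0]) / det
    return a1, a2

# (3, 1) = 2*(1, 1) + 1*(1, -1): coordinates (2, 1) in the basis {(1,1), (1,-1)}
coords = coordinates((1, 1), (1, -1), (3, 1))
# In the standard basis, the coordinates are just the entries themselves.
std = coordinates((1, 0), (0, 1), (5, 7))
```
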
22.
Determinant
–
In linear algebra, the determinant is a useful value that can be computed from the elements of a square matrix. The determinant of a matrix A is denoted det(A), det A, or |A|, and it can be viewed as the scaling factor of the linear transformation described by the matrix. In the case of a 2 × 2 matrix, the determinant is given by the formula ad − bc below; the determinant of a 3 × 3 matrix can in turn be computed by the Laplace expansion, as a signed sum of 2 × 2 determinants. Each determinant of a 2 × 2 matrix in this expansion is called a minor of the matrix A, and the same sort of procedure can be used to find the determinant of a 4 × 4 matrix, the determinant of a 5 × 5 matrix, and so forth. The use of determinants in calculus includes the Jacobian determinant in the change of variables rule for integrals of functions of several variables. Determinants are also used to define the characteristic polynomial of a matrix; in analytic geometry, determinants express the signed n-dimensional volumes of n-dimensional parallelepipeds. Sometimes, determinants are used merely as a compact notation for expressions that would otherwise be unwieldy to write down. When the entries of the matrix are taken from a field, it can be proven that a matrix has an inverse if and only if its determinant is nonzero. There are various equivalent ways to define the determinant of a square matrix A, i.e. one with the same number of rows and columns. One way to define the determinant is expressed in terms of the columns of the matrix; the defining properties mean that the determinant is an alternating multilinear function of the columns that maps the identity matrix to the underlying unit scalar. These properties suffice to uniquely calculate the determinant of any square matrix, provided the underlying scalars form a field; the definition below shows that such a function exists, and it can be shown to be unique. Assume A is a square matrix with n rows and n columns. The entries can be numbers or expressions; the definition of the determinant depends only on the fact that they can be added and multiplied together in a commutative manner. The determinant of a 2 × 2 matrix is defined by det (a b; c d) = ad − bc.
If the matrix entries are real numbers, the matrix A can be used to represent two linear maps: one that maps the standard basis vectors to the rows of A, and one that maps them to the columns of A. In either case, the images of the basis vectors form a parallelogram that represents the image of the unit square under the mapping. The parallelogram defined by the rows of the matrix (a b; c d) is the one with vertices at (0, 0), (a, b), (a + c, b + d), and (c, d). The absolute value of ad − bc is the area of the parallelogram; the determinant together with its sign gives the oriented area of the parallelogram, which is negative exactly when the mapping reverses orientation.
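The area interpretation of the 2 × 2 determinant can be checked directly: det gives the signed area of the parallelogram spanned by the rows, and swapping the rows flips the sign (the helper name det2 is illustrative):

```python
def det2(a, b, c, d):
    # Determinant of the 2x2 matrix (a b; c d)
    return a * d - b * c

# Parallelogram spanned by the rows (3, 1) and (1, 2): area |3*2 - 1*1| = 5
area = abs(det2(3, 1, 1, 2))
# Swapping the rows reverses orientation, so the signed area flips sign.
flipped = det2(1, 2, 3, 1)
# The identity matrix maps the unit square to itself: determinant 1.
unit = det2(1, 0, 0, 1)
```
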
23.
Matrix multiplication
–
In mathematics, matrix multiplication or the matrix product is a binary operation that produces a matrix from two matrices. The definition is motivated by linear equations and linear transformations on vectors, which have applications in applied mathematics and physics. When two linear transformations are represented by matrices, then the matrix product represents the composition of the two transformations. The matrix product is not commutative in general, although it is associative and is distributive over matrix addition; the identity element of the matrix product is the identity matrix, and a square matrix may have an inverse matrix. Determinant multiplicativity applies to the matrix product. The matrix product is also important for matrix groups, and for the theory of group representations and irreps. Computing matrix products is both a central operation in many numerical algorithms and potentially time consuming, making it one of the most well-studied problems in numerical computing; various algorithms have been devised for computing C = AB, especially for large matrices. Index notation is often the clearest way to express definitions, and is used as standard in the literature. The i, j entry of matrix A is indicated by (A)ij or Aij, whereas a numerical label on a collection of matrices is subscripted only, e.g. A1, A2. Assume two matrices are to be multiplied: an n × m matrix A and an m × p matrix B. The i, j entry of the product is obtained by multiplying the entries Aik by Bkj for k = 1, 2, …, m and summing the results over k: (AB)ij = Σₖ₌₁ᵐ Aik Bkj. Thus the product AB is defined only if the number of columns in A is equal to the number of rows in B, and each entry may be computed one at a time. Sometimes, the summation convention is used, as it is understood to sum over the repeated index k; to prevent any ambiguity, this convention will not be used in this article. Usually the entries are numbers or expressions, but can even be matrices themselves.
The matrix product can still be calculated in exactly the same way when the entries are themselves matrices; see below for details on how the matrix product can be calculated in terms of blocks taking the forms of rows and columns. The figure to the right illustrates diagrammatically the product of two matrices A and B, showing how each intersection in the product matrix corresponds to a row of A and a column of B. Note that AB and BA can be two different matrices: if A is a 1 × 3 row vector and B is a 3 × 1 column vector, then AB is a 1 × 1 matrix while BA is a 3 × 3 matrix; and when the sizes do not match in the required way, one of the two products may not be defined at all. The product of a square matrix and a column matrix arises naturally in linear algebra, for solving linear equations. By choosing the entries a, b, c, p, q, r, u, v, w in A appropriately, A can represent a variety of transformations, such as rotations, scaling and reflections, and shears. For example, if A = (1 2; 3 4) and B = (0 1; 1 0), their product is AB = (2 1; 4 3).
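The entrywise rule (AB)ij = Σₖ Aik Bkj translates into a triple loop; the sketch below (the helper name matmul is illustrative) also shows the row-vector/column-vector example, where AB is 1 × 1 but BA is 3 × 3:

```python
def matmul(A, B):
    # (AB)ij = sum over k of A[i][k] * B[k][j]; requires cols(A) == rows(B)
    n, m, p = len(A), len(A[0]), len(B[0])
    assert m == len(B), "number of columns of A must equal number of rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

row = [[1, 2, 3]]          # a 1 x 3 matrix
col = [[4], [5], [6]]      # a 3 x 1 matrix

inner = matmul(row, col)   # 1 x 1: the single entry is 1*4 + 2*5 + 3*6
outer = matmul(col, row)   # 3 x 3: AB and BA differ even when both are defined
```
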
24.
Elementary particle
–
In particle physics, an elementary particle or fundamental particle is a particle whose substructure is unknown; thus, it is unknown whether it is composed of other particles. A particle containing two or more elementary particles is a composite particle. Ordinary matter is composed of atoms, and soon after the atom's discovery its subatomic constituents were identified. As the 1930s opened, the electron and the proton had been observed, along with the photon. Via quantum theory, protons and neutrons were found to contain quarks (up quarks and down quarks), now considered elementary particles. And within a molecule, the electron's three degrees of freedom (charge, spin, orbital) can separate via the wavefunction into three quasiparticles (holon, spinon, orbiton); yet a free electron, which is not orbiting a nucleus and so lacks orbital motion, appears unsplittable. Meanwhile, an elementary boson mediating gravitation, the graviton, remains hypothetical. All elementary particles are, depending on their spin, either bosons or fermions. These are differentiated via the spin–statistics theorem of quantum statistics: particles of half-integer spin exhibit Fermi–Dirac statistics and are fermions; particles of integer spin, in other words full-integer, exhibit Bose–Einstein statistics and are bosons. In the Standard Model, elementary particles are represented for predictive utility as point particles; though extremely successful, the Standard Model is limited to the microcosm by its omission of gravitation and has some parameters arbitrarily added but unexplained. According to the current models of big bang nucleosynthesis, the composition of visible matter of the universe should be about 75% hydrogen and 25% helium-4. Neutrons are made up of one up quark and two down quarks, while protons are made of two up quarks and one down quark. Since the other elementary particles are so light or so rare when compared to atomic nuclei, one can conclude that most of the mass of the visible universe consists of protons and neutrons.
Some estimates imply that there are roughly 10⁸⁰ baryons in the observable universe; the number of protons in the observable universe is called the Eddington number. Other estimates imply that roughly 10⁹⁷ elementary particles exist in the universe, mostly photons and gravitons. However, the Standard Model is widely considered to be a provisional theory rather than a truly fundamental one. The 12 fundamental fermionic flavours are divided into three generations of four particles each; six of the particles are quarks. The remaining six are leptons, three of which are neutrinos, and the other three of which have an electric charge of −1: the electron and its two cousins, the muon and the tau.
25.
Spin (physics)
–
In quantum mechanics and particle physics, spin is an intrinsic form of angular momentum carried by elementary particles, composite particles, and atomic nuclei. Spin is one of two types of angular momentum in quantum mechanics, the other being orbital angular momentum. In some ways, spin is like a vector quantity: it has a definite magnitude and a direction. All elementary particles of a given kind have the same magnitude of spin angular momentum, which is indicated by assigning the particle a spin quantum number. The SI unit of spin is the joule-second, just as with classical angular momentum; very often, the spin quantum number is simply called spin, leaving its meaning as the unitless spin quantum number to be inferred from context. When combined with the spin–statistics theorem, the spin of electrons results in the Pauli exclusion principle. Wolfgang Pauli was the first to propose the concept of spin; in 1925, Ralph Kronig, George Uhlenbeck and Samuel Goudsmit at Leiden University suggested a physical interpretation of particles spinning around their own axes. The mathematical theory was worked out in depth by Pauli in 1927, and when Paul Dirac derived his relativistic quantum mechanics in 1928, electron spin was an essential part of it. As the name suggests, spin was originally conceived as the rotation of a particle around some axis, and this picture is correct insofar as spin obeys the same mathematical laws as quantized angular momenta do. On the other hand, spin has some peculiar properties that distinguish it from orbital angular momenta: although the direction of its spin can be changed, a particle cannot be made to spin faster or slower. The spin of a particle is associated with a magnetic dipole moment with a g-factor differing from 1; this could only occur classically if the internal charge of the particle were distributed differently from its mass. The conventional definition of the spin quantum number, s, is s = n/2, where n can be any non-negative integer.
Hence the allowed values of s are 0, 1/2, 1, 3/2, 2, etc. The value of s for an elementary particle depends only on the type of particle, and cannot be altered in any known way. The spin angular momentum, S, of any physical system is quantized. The allowed values of S are S = ℏ √(s(s + 1)) = (h/4π) √(n(n + 2)), where h is the Planck constant and ℏ = h/2π. In contrast, orbital angular momentum can only take on integer values of s, i.e. even-numbered values of n. Those particles with half-integer spins, such as 1/2, 3/2, 5/2, are known as fermions, while those particles with integer spins, such as 0, 1, 2, are known as bosons. The two families of particles obey different rules and broadly have different roles in the world around us. A key distinction between the two families is that fermions obey the Pauli exclusion principle: that is, there cannot be two identical fermions simultaneously having the same quantum numbers.
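The two closed forms for the spin magnitude agree, since with s = n/2 one has √(s(s + 1)) = √(n(n + 2))/2; a short Python sketch checks this for the first few values of n (the function name is illustrative, and values are given in units of ℏ):

```python
import math

def spin_magnitude_over_hbar(n):
    # S / ħ = sqrt(s(s+1)) with the spin quantum number s = n/2
    s = n / 2.0
    return math.sqrt(s * (s + 1))

# Equivalent form: S / ħ = sqrt(n(n+2)) / 2, i.e. S = (h / 4π) sqrt(n(n+2))
vals = [spin_magnitude_over_hbar(n) for n in range(5)]
alt = [math.sqrt(n * (n + 2)) / 2 for n in range(5)]
```
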
26.
Angle
–
In planar geometry, an angle is the figure formed by two rays, called the sides of the angle, sharing a common endpoint, called the vertex of the angle. Angles formed by two rays lie in a plane, but this plane does not have to be a Euclidean plane; angles are also formed by the intersection of two planes in Euclidean and other spaces. Angles formed by the intersection of two curves in a plane are defined as the angle determined by the tangent rays at the point of intersection. Similar statements hold in space; for example, the angle formed by two great circles on a sphere is the dihedral angle between the planes determined by the great circles. Angle is also used to designate the measure of an angle or of a rotation; this measure is the ratio of the length of a circular arc to its radius. In the case of a geometric angle, the arc is centered at the vertex and delimited by the sides. In the case of a rotation, the arc is centered at the center of the rotation and delimited by any other point and its image by the rotation. The word angle comes from the Latin word angulus, meaning corner; cognate words include the Greek ἀγκύλος, meaning crooked or curved, and both are connected with the Proto-Indo-European root *ank-, meaning to bend or bow. Euclid defines a plane angle as the inclination to each other, in a plane, of two lines which meet each other; according to Proclus, an angle must be either a quality or a quantity, or a relationship. In mathematical expressions, it is common to use Greek letters to serve as variables standing for the size of some angle; lower case Roman letters are also used, as are upper case Roman letters in the context of polygons. See the figures in this article for examples. In geometric figures, angles may also be identified by the labels attached to the three points that define them; for example, the angle at vertex A enclosed by the rays AB and AC is denoted ∠BAC. Sometimes, where there is no risk of confusion, the angle may be referred to simply by its vertex.
However, in many geometrical situations, it is obvious from context that the positive angle less than or equal to 180 degrees is meant. Otherwise, a convention may be adopted so that ∠BAC always refers to the anticlockwise (positive) angle from B to C. Angles smaller than a right angle (less than 1/4 turn) are called acute angles. An angle equal to 1/4 turn (90° or π/2 radians) is called a right angle; two lines that form a right angle are said to be normal, orthogonal, or perpendicular. Angles larger than a right angle and smaller than a straight angle are called obtuse angles. An angle equal to 1/2 turn (180° or π radians) is called a straight angle, and angles larger than a straight angle but less than 1 turn are called reflex angles.
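The measure of an angle as arc length divided by radius, and the classification by fractions of a turn described above, can be sketched in a few lines of Python (the helper names angle_from_arc and classify are illustrative):

```python
import math

def angle_from_arc(arc_length, radius):
    # The measure of an angle (in radians) is the ratio arc length / radius.
    return arc_length / radius

def classify(theta):
    # Classify an angle theta in radians, 0 < theta < 2*pi, per the text above.
    quarter_turn, half_turn = math.pi / 2, math.pi
    if theta < quarter_turn:
        return "acute"
    if theta == quarter_turn:
        return "right"
    if theta < half_turn:
        return "obtuse"
    if theta == half_turn:
        return "straight"
    return "reflex"

# A half circumference of a unit circle subtends a straight angle.
straight = angle_from_arc(math.pi, 1.0)
```
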
27.
Dot product
–
In mathematics, the dot product or scalar product is an algebraic operation that takes two equal-length sequences of numbers and returns a single number; it is sometimes called the inner product in the context of Euclidean space. Algebraically, the dot product is the sum of the products of the corresponding entries of the two sequences of numbers. Geometrically, it is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them. The dot product may be defined algebraically or geometrically; the geometric definition is based on the notions of angle and distance, and the equivalence of these two definitions relies on having a Cartesian coordinate system for Euclidean space. In modern presentations of Euclidean geometry, the notions of length and angle are not primitive but are defined by means of the dot product, so the equivalence of the two definitions of the dot product is a part of the equivalence of the classical and the modern formulations of Euclidean geometry. For instance, in three-dimensional space, the dot product of the vectors (1, 3, −5) and (4, −2, −1) is 1·4 + 3·(−2) + (−5)·(−1) = 3. In Euclidean space, a Euclidean vector is a geometric object that possesses both a magnitude and a direction. A vector can be pictured as an arrow: its magnitude is its length, and its direction is the direction that the arrow points. The magnitude of a vector a is denoted by ∥a∥. The dot product of two Euclidean vectors a and b is defined by a · b = ∥a∥ ∥b∥ cos θ, where θ is the angle between a and b. In particular, if a and b are orthogonal, then the angle between them is 90° and a · b = 0. The scalar projection of a Euclidean vector a in the direction of a Euclidean vector b is given by a_b = ∥a∥ cos θ, where θ is the angle between a and b. In terms of the geometric definition of the dot product, this can be rewritten a_b = a · b̂, where b̂ = b/∥b∥ is the unit vector in the direction of b. The dot product is thus characterized geometrically by a · b = a_b ∥b∥ = b_a ∥a∥. The dot product, defined in this manner, is homogeneous under scaling in each variable, and it also satisfies a distributive law, meaning that a · (b + c) = a · b + a · c.
These properties may be summarized by saying that the dot product is a bilinear form. Moreover, this bilinear form is positive definite, which means that a ⋅ a is never negative and is zero if and only if a = 0. If e1, …, en are the standard basis vectors in Rn, then we may write a = ∑_i a_i e_i and b = ∑_i b_i e_i. The vectors e_i are an orthonormal basis, which means that they have unit length and are at right angles to each other. Hence, since these vectors have unit length, e_i ⋅ e_i = 1, and since they form right angles with each other, e_i ⋅ e_j = 0 whenever i ≠ j. Thus in general we can say that e_i ⋅ e_j = δ_ij, where δ_ij is the Kronecker delta.
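The algebraic and geometric definitions above can be checked against each other in a few lines of Python. This is a minimal sketch; the function names `dot` and `angle_between` are chosen for this example only.

```python
import math

def dot(a, b):
    """Algebraic dot product: sum of products of corresponding entries."""
    assert len(a) == len(b), "vectors must have equal length"
    return sum(x * y for x, y in zip(a, b))

def angle_between(a, b):
    """Recover theta from the geometric identity a.b = |a| |b| cos(theta)."""
    norm = lambda v: math.sqrt(dot(v, v))
    return math.acos(dot(a, b) / (norm(a) * norm(b)))

# Orthogonal vectors have dot product 0 and an angle of 90 degrees.
print(dot([1, 0], [0, 1]))                          # 0
print(math.degrees(angle_between([1, 0], [0, 1])))  # 90.0
```

The same `dot` function also computes the scalar projection a_b once divided by ∥b∥, matching the identity a ⋅ b = a_b ∥b∥.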
28.
Classical group
–
In mathematics, the classical groups are certain families of Lie groups; of these, the complex classical Lie groups are four infinite families that, together with the exceptional groups, exhaust the classification of simple Lie groups. The compact classical groups are compact real forms of the complex classical groups. The finite analogues of the classical groups are the classical groups of Lie type. The term classical group was coined by Hermann Weyl, it being the title of his 1939 monograph The Classical Groups. The classical groups form the deepest and most useful part of the subject of linear Lie groups. Most types of classical groups find application in classical and modern physics; a few examples are the following. The rotation group SO(3) is a symmetry of Euclidean space and of all fundamental laws of physics, and the special unitary group SU(3) is the symmetry group of quantum chromodynamics. The classical groups are exactly the general linear groups over R, C and H together with the automorphism groups of the non-degenerate forms discussed below. These groups are usually additionally restricted to the subgroups whose elements have determinant 1. The classical groups, with the determinant 1 condition, are listed in the table below; in the sequel, the determinant 1 condition is not used consistently, in the interest of greater generality. The complex classical groups are SL(n, C), SO(n, C) and Sp(n, C); a group is complex according to whether its Lie algebra is complex. The real classical groups refers to all of the classical groups, since any Lie algebra is a real algebra. The compact classical groups are the compact real forms of the complex classical groups; these are, in turn, SU(n), SO(n) and Sp(n). One characterization of the compact real form is in terms of the Lie algebra g: if g = u + iu, the complexification of u, and the connected group K generated by exp(X), X ∈ u, is compact, then K is a compact real form. The classical groups can uniformly be characterized in a different way using real forms.
The classical groups are the following: the complex linear algebraic groups SL(n, C), SO(n, C) and Sp(n, C), together with their real forms. For instance, SO∗(2n) is a real form of SO(2n, C), SU(p, q) is a real form of SL(n, C), and SL(n, H) is a real form of SL(2n, C). Without the determinant 1 condition, replace the special linear groups with the general linear groups in the characterization. The algebraic groups in question are Lie groups, but the algebraic qualifier is needed to get the notion of a real form. The classical groups are defined in terms of forms defined on Rn, Cn, and Hn. The quaternions, H, do not constitute a field because multiplication does not commute; they form a division ring or a skew field or non-commutative field.
29.
Rotation matrix
–
In linear algebra, a rotation matrix is a matrix that is used to perform a rotation in Euclidean space. For example, the matrix R = [[cos θ, −sin θ], [sin θ, cos θ]] rotates points in the xy-Cartesian plane counter-clockwise through an angle θ about the origin of the Cartesian coordinate system. To perform the rotation using a rotation matrix R, the position of each point must be represented by a column vector v; a rotated vector is obtained by the matrix multiplication Rv. Rotation matrices also provide a means of numerically representing an arbitrary rotation of the axes about the origin, without appealing to angular specification. These coordinate rotations are a natural way to express the orientation of a camera, or the attitude of a spacecraft. The examples in this article apply to active rotations of vectors counter-clockwise in a right-handed coordinate system by pre-multiplication. If any one of these conventions is changed, then the inverse of the matrix should be used. Since matrix multiplication has no effect on the zero vector, rotation matrices can only be used to describe rotations about the origin of the coordinate system. Rotation matrices provide an algebraic description of such rotations, and are used extensively for computations in geometry and physics. Rotation matrices are square matrices with real entries. More specifically, they can be characterized as orthogonal matrices with determinant 1. In some literature, the term rotation is generalized to include improper rotations, characterized by orthogonal matrices with determinant −1; these combine proper rotations with reflections. In other cases, where reflections are not being considered, the label proper may be dropped. This convention is followed in this article. The set of all orthogonal matrices of size n with determinant +1 forms a group known as the special orthogonal group SO(n); the most important special case is that of the rotation group SO(3). The set of all orthogonal matrices of size n with determinant +1 or −1 forms the orthogonal group O(n).
In two dimensions, every rotation matrix has the form R = [[cos θ, −sin θ], [sin θ, cos θ]]. This rotates column vectors by means of the matrix multiplication (x′, y′) = R(x, y), so the coordinates of the point after rotation are x′ = x cos θ − y sin θ and y′ = x sin θ + y cos θ. The direction of rotation is counterclockwise if θ is positive.
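The two coordinate formulas above translate directly into Python. This is an illustrative sketch; the function name `rotate` is an assumption of this example.

```python
import math

def rotate(point, theta):
    """Apply the 2x2 rotation matrix [[cos, -sin], [sin, cos]] to a point."""
    x, y = point
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

# A quarter turn counter-clockwise sends (1, 0) to (0, 1),
# up to floating-point round-off.
x, y = rotate((1.0, 0.0), math.pi / 2)
print(round(x, 10), round(y, 10))
```

Composing two calls to `rotate` with angles θ1 and θ2 gives the same result as a single call with θ1 + θ2, reflecting the group structure of SO(2).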
30.
Standard basis
–
In mathematics, the standard basis for a Euclidean space is the set of unit vectors pointing in the direction of the axes of a Cartesian coordinate system. For example, the standard basis for the Euclidean plane is formed by the vectors e_x = (1, 0), e_y = (0, 1). Here the vector e_x points in the x direction and the vector e_y points in the y direction. There are several common notations for these vectors, including (e_x, e_y), (e_1, e_2), and (i, j). These vectors are sometimes written with a hat to emphasize their status as unit vectors. Each of these vectors is sometimes referred to as the versor of the corresponding Cartesian axis. These vectors are a basis in the sense that any other vector can be expressed uniquely as a linear combination of them. For example, every vector v in three-dimensional space can be written uniquely as v_x e_x + v_y e_y + v_z e_z, the scalars v_x, v_y, v_z being the scalar components of the vector v. In n-dimensional Euclidean space, the standard basis consists of n distinct vectors. Standard bases can be defined for other vector spaces, such as spaces of polynomials or of matrices. In both cases, the standard basis consists of the elements of the vector space such that all coefficients but one are 0 and the non-zero coefficient is 1. For polynomials, the standard basis consists of the monomials and is commonly called the monomial basis. For the space M_{m×n} of matrices, the standard basis consists of the m×n matrices with exactly one non-zero entry, which is 1. For example, the standard basis for 2×2 matrices is formed by the 4 matrices e_11 = [[1, 0], [0, 0]], e_12 = [[0, 1], [0, 0]], e_21 = [[0, 0], [1, 0]], e_22 = [[0, 0], [0, 1]]. By definition, the standard basis is a sequence of orthogonal unit vectors; in other words, it is an ordered and orthonormal basis. However, an ordered orthonormal basis is not necessarily a standard basis: for instance, the two vectors representing a 30° rotation of the 2D standard basis described above are orthonormal but are not a standard basis. There is a standard basis also for the ring of polynomials in n indeterminates over a field, namely the monomials.
This family is the canonical basis of the R-module R^(I) of all families f = (f_i) from an index set I into a ring R that are zero except for a finite number of indices, if we interpret 1 as 1_R, the unit of R. The existence of standard bases has become a topic of interest in computational algebraic geometry; it is now a part of a theory called standard monomial theory.
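The decomposition v = v_x e_x + v_y e_y + v_z e_z described above can be sketched in Python. The helper names `standard_basis` and `decompose` are hypothetical, chosen for this illustration.

```python
def standard_basis(n):
    """The n standard basis vectors of R^n: all zeros except a single 1."""
    return [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]

def decompose(v, basis):
    """Scalar components of v along an orthonormal basis, via dot products."""
    return [sum(vi * bi for vi, bi in zip(v, b)) for b in basis]

e = standard_basis(3)
print(e[0])                      # (1, 0, 0)
print(decompose((2, -1, 5), e))  # [2, -1, 5]
```

For the standard basis, the components of a vector are exactly its coordinates, which is the "uniquely expressed as a linear combination" property in computational form.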
31.
Subgroup
–
In group theory, a branch of mathematics, given a group G under a binary operation ∗, a subset H of G is called a subgroup of G if H also forms a group under the operation ∗. More precisely, H is a subgroup of G if the restriction of ∗ to H × H is a group operation on H. This is usually denoted H ≤ G, read as "H is a subgroup of G". The trivial subgroup of any group is the subgroup consisting of just the identity element. A proper subgroup of a group G is a subgroup H which is a proper subset of G, that is, H ≠ G. This is usually represented notationally by H < G, read as "H is a proper subgroup of G". Some authors also exclude the trivial group from being proper. If H is a subgroup of G, then G is sometimes called an overgroup of H. The same definitions apply more generally when G is an arbitrary semigroup, but this article will only deal with subgroups of groups. The group G is sometimes denoted by the pair (G, ∗), usually to emphasize the operation ∗ when G carries multiple algebraic or other structures. This article will write ab for a ∗ b, as is usual. A subset H of the group G is a subgroup of G if and only if it is nonempty and closed under products and inverses. In the case that H is finite, H is a subgroup if and only if H is closed under products. The above condition can also be stated in terms of a homomorphism. The identity of a subgroup is the identity of the group: if G is a group with identity e_G, and H is a subgroup of G with identity e_H, then e_H = e_G. The intersection of subgroups A and B is again a subgroup. The union of subgroups A and B is a subgroup if and only if either A or B contains the other, since for example 2 and 3 are in the union of 2Z and 3Z but their sum 5 is not. Another example is the union of the x-axis and the y-axis in the plane; each of these objects is a subgroup, but their union is not. These two subgroups also serve as an example of two subgroups whose intersection is precisely the identity. If S is a subset of G, there exists a minimum subgroup containing S, denoted <S>. An element of G is in <S> if and only if it is a product of elements of S and their inverses.
Every element a of a group G generates the cyclic subgroup <a>. If <a> is isomorphic to Z/nZ for some positive integer n, then n is the smallest positive integer for which a^n = e, and n is called the order of a. If <a> is isomorphic to Z, then a is said to have infinite order. The subgroups of any given group form a complete lattice under inclusion, called the lattice of subgroups. If e is the identity of G, then the trivial group {e} is the minimum subgroup of G.
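The cyclic subgroup <a> and the order of a can be computed concretely in the additive group Z/nZ. A small sketch; the function name `cyclic_subgroup` is chosen for this example.

```python
def cyclic_subgroup(a, n):
    """The subgroup <a> of the additive group Z/nZ generated by a."""
    elems, x = {0}, a % n
    while x not in elems:   # keep adding a until the orbit closes
        elems.add(x)
        x = (x + a) % n
    return sorted(elems)

# 2 generates {0, 2, 4} in Z/6Z, so the order of 2 is 3;
# 5 is coprime to 6 and generates all of Z/6Z.
print(cyclic_subgroup(2, 6))  # [0, 2, 4]
print(cyclic_subgroup(5, 6))  # [0, 1, 2, 3, 4, 5]
```

The size of the returned subgroup is the order of a, which always divides n, in line with Lagrange's theorem.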
32.
Orthogonal group
–
In mathematics, the orthogonal group in dimension n, denoted O(n), is the group of distance-preserving transformations of a Euclidean space of dimension n that preserve a fixed point. Equivalently, it is the group of n×n orthogonal matrices, where the group operation is given by matrix multiplication; an orthogonal matrix is a real matrix whose inverse equals its transpose. An important subgroup of O(n) is the special orthogonal group, denoted SO(n). This group is also called the rotation group, because in dimensions 2 and 3 its elements are the usual rotations around a point or an axis. In low dimension, these groups have been widely studied; see SO(2), SO(3) and SO(4). The orthogonal group is a subgroup of the general linear group GL(n, R), given by O(n) = {Q ∈ GL(n, R) : QᵀQ = QQᵀ = I}, where Qᵀ is the transpose of Q and I is the identity matrix. This article mainly discusses the orthogonal groups of quadratic forms that may be expressed over some bases as the dot product, over the reals. Over the reals, for any non-degenerate quadratic form there is a basis on which the form is a sum and difference of squares. Thus the orthogonal group depends only on the numbers of 1s and of −1s, and is denoted O(p, q); for details, see indefinite orthogonal group. The derived subgroup Ω(n) of O(n) is an often studied object, and the Cartan–Dieudonné theorem describes the structure of the orthogonal group for a non-singular form. The determinant of any orthogonal matrix is either 1 or −1; the orthogonal n-by-n matrices with determinant 1 form a normal subgroup of O(n) known as the special orthogonal group SO(n), consisting of all proper rotations. By analogy with GL–SL, the full orthogonal group is sometimes called the general orthogonal group and denoted GO. The term rotation group can refer to either the special or the general orthogonal group. When this distinction is to be emphasized, the groups may be denoted O(n) and SO(n), reserving n for the dimension of the space. The letters p or r are also used, indicating the rank of the corresponding Lie algebra; in odd dimension 2p + 1 the corresponding Lie algebra is so(2p + 1), while in even dimension 2r the Lie algebra is so(2r). In two dimensions, O(2) is the group of all rotations about the origin and all reflections along a line through the origin, while SO(2) is the group of all rotations about the origin. These groups are closely related: SO(2) is a subgroup of O(2) of index 2.
More generally, in any number of dimensions, an even number of reflections composes to a rotation; therefore the rotations define a subgroup of O(n), but the reflections do not define a subgroup. A reflection through the origin may be generated as a combination of one reflection along each of the axes; the reflection through the origin is not a reflection in the usual sense in even dimensions, but rather a rotation.
33.
Isomorphism
–
In mathematics, an isomorphism is a homomorphism or morphism that admits an inverse. Two mathematical objects are isomorphic if an isomorphism exists between them. An automorphism is an isomorphism whose source and target coincide. For most algebraic structures, including groups and rings, a homomorphism is an isomorphism if and only if it is bijective. In topology, where the morphisms are continuous functions, isomorphisms are also called homeomorphisms or bicontinuous functions. In mathematical analysis, where the morphisms are differentiable functions, isomorphisms are also called diffeomorphisms. A canonical isomorphism is a canonical map that is an isomorphism. Two objects are said to be canonically isomorphic if there is a canonical isomorphism between them. Isomorphisms are formalized using category theory. Let R+ be the multiplicative group of positive real numbers, and let R be the additive group of real numbers. The logarithm function log: R+ → R satisfies log(xy) = log x + log y for all x, y ∈ R+, so it is a group homomorphism. The exponential function exp: R → R+ satisfies exp(x + y) = exp(x) exp(y) for all x, y ∈ R, and the identities log exp x = x and exp log y = y show that log and exp are inverses of each other. Since log is a homomorphism that has an inverse that is also a homomorphism, log is an isomorphism of groups. Because log is an isomorphism, it translates multiplication of positive real numbers into addition of real numbers; this facility makes it possible to multiply real numbers using a ruler and a table of logarithms, or a slide rule with a logarithmic scale. Consider the group (Z6, +), the integers from 0 to 5 with addition modulo 6, and the group (Z2 × Z3, +) of pairs with coordinate-wise addition modulo 2 and modulo 3. These structures are isomorphic under addition, if you identify them using the following scheme: (0,0) ↦ 0, (1,1) ↦ 1, (0,2) ↦ 2, (1,0) ↦ 3, (0,1) ↦ 4, (1,2) ↦ 5, or in general (a, b) ↦ (3a + 4b) mod 6. For example, (1,1) + (1,0) = (0,1), which translates in the other system as 1 + 3 = 4. Even though these two groups look different in that the sets contain different elements, they are indeed isomorphic. More generally, the direct product of two cyclic groups Zm and Zn is isomorphic to Z_mn if and only if m and n are coprime.
For example, if R is an ordering ≤ and S an ordering ⊑, an isomorphism from (X, ≤) to (Y, ⊑) is a bijection that preserves the orderings in both directions; such an isomorphism is called an order isomorphism or an isotone isomorphism. If X = Y, then it is a relation-preserving automorphism. In a concrete category, such as the category of topological spaces or categories of algebraic objects like groups, rings, and modules, an isomorphism must be bijective on the underlying sets. In algebraic categories, an isomorphism is the same as a homomorphism which is bijective on underlying sets. In abstract algebra, two basic isomorphisms are defined: group isomorphism, an isomorphism between groups, and ring isomorphism, an isomorphism between rings. Just as the automorphisms of an algebraic structure form a group, the isomorphisms between two structures form a heap; letting a particular isomorphism identify the two structures turns this heap into a group.
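The identification of Z2 × Z3 with Z6 described above can be checked exhaustively in a few lines of Python. This is an illustrative sketch; the map (a, b) ↦ (3a + 4b) mod 6 is one standard choice of isomorphism.

```python
# The map (a, b) -> (3a + 4b) mod 6 from Z2 x Z3 to Z6.
def phi(a, b):
    return (3 * a + 4 * b) % 6

pairs = [(a, b) for a in range(2) for b in range(3)]

# phi is a bijection: the six pairs hit all six residues exactly once.
assert sorted(phi(a, b) for a, b in pairs) == list(range(6))

# phi is a homomorphism: coordinate-wise addition maps to addition mod 6.
for a1, b1 in pairs:
    for a2, b2 in pairs:
        lhs = phi((a1 + a2) % 2, (b1 + b2) % 3)
        rhs = (phi(a1, b1) + phi(a2, b2)) % 6
        assert lhs == rhs
print("phi is an isomorphism from Z2 x Z3 to Z6")
```

The same brute-force check fails for Z2 × Z2 against Z4, consistent with the coprimality condition stated above.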
34.
Matrix product
–
In mathematics, matrix multiplication or the matrix product is a binary operation that produces a matrix from two matrices. The definition is motivated by linear equations and linear transformations on vectors, which have numerous applications in applied mathematics and physics. When two linear transformations are represented by matrices, the matrix product represents the composition of the two transformations. The matrix product is not commutative in general, although it is associative and is distributive over matrix addition. The identity element of the matrix product is the identity matrix, and a square matrix may have an inverse matrix. Determinant multiplicativity applies to the matrix product. The matrix product is also important for matrix groups, and for the theory of group representations and irreps. Computing matrix products is both a central operation in many numerical algorithms and potentially time consuming, making it one of the most well-studied problems in numerical computing. Various algorithms have been devised for computing C = AB, especially for large matrices. Index notation is often the clearest way to express definitions, and is used as standard in the literature. The i, j entry of matrix A is indicated by (A)ij or Aij, whereas a numerical label on a collection of matrices is subscripted only, e.g. A1, A2. Assume two matrices are to be multiplied: an n × m matrix A and an m × p matrix B. The i, j entry of the product is obtained by multiplying the entries Aik of row i of A by the entries Bkj of column j of B, for k = 1, 2, …, m, and summing the results over k: (AB)ij = ∑_{k=1}^{m} Aik Bkj. Thus the product AB is defined only if the number of columns in A is equal to the number of rows in B. Each entry may be computed one at a time. Sometimes the summation convention is used, as it is understood to sum over the repeated index k; to prevent any ambiguity, this convention will not be used in this article. Usually the entries are numbers or expressions, but they can even be matrices themselves.
In that case the matrix product can still be calculated exactly the same way; see below for details on how the matrix product can be calculated in terms of blocks taking the forms of rows and columns. The figure to the right illustrates diagrammatically the product of two matrices A and B, showing how each intersection in the product matrix corresponds to a row of A and a column of B. Note that AB and BA can be two very different matrices: if A is a 1×3 row vector and B is a 3×1 column vector, then AB is a 1×1 matrix while BA is a 3×3 matrix, and if A is 1×3 while B is 3×3, the product AB is defined but BA is not. The product of a square matrix multiplied by a column matrix arises naturally in linear algebra, for solving linear equations and representing linear transformations. By choosing the nine entries of a 3×3 matrix A appropriately, A can represent a variety of transformations such as rotations, scaling and reflections, and shears.
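The index formula (AB)ij = ∑_k Aik Bkj, and the row-times-column asymmetry just described, can be sketched in plain Python (the function name `matmul` is illustrative):

```python
def matmul(A, B):
    """(AB)_ij = sum over k of A_ik * B_kj; needs cols(A) == rows(B)."""
    m, p = len(B), len(B[0])
    assert len(A[0]) == m, "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(len(A))]

row = [[1, 2, 3]]          # a 1x3 row vector
col = [[4], [5], [6]]      # a 3x1 column vector
print(matmul(row, col))    # [[32]] -- a 1x1 matrix
print(matmul(col, row))    # a 3x3 matrix: AB and BA differ in shape
```

Swapping the operands changes not only the entries but the entire shape of the result, which is the clearest elementary illustration of non-commutativity.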
35.
General linear group
–
In mathematics, the general linear group of degree n is the set of n×n invertible matrices, together with the operation of ordinary matrix multiplication. This forms a group, because the product of two invertible matrices is again invertible, and the inverse of an invertible matrix is invertible. To be more precise, it is necessary to specify what kind of objects may appear in the entries of the matrix. For example, the general linear group over R is the group of n×n invertible matrices of real numbers, and is denoted by GLn(R) or GL(n, R). More generally, the general linear group of degree n over any field F, or a ring R, is the set of n×n invertible matrices with entries from F or R. Typical notation is GLn(F) or GL(n, F), or simply GL(n) if the field is understood. More generally still, the general linear group of a vector space, GL(V), is the abstract automorphism group, not necessarily written as matrices. The special linear group, written SL(n, F) or SLn(F), is the subgroup of GL(n, F) consisting of matrices with a determinant of 1. The group GL(n, F) and its subgroups are often called linear groups or matrix groups. These groups are important in the theory of group representations, and also arise in the study of spatial symmetries and symmetries of vector spaces in general. The modular group may be realised as a quotient of the special linear group SL(2, Z). If n ≥ 2, then the group GL(n, F) is not abelian. If V has finite dimension n, then GL(V) and GL(n, F) are isomorphic. The isomorphism is not canonical; it depends on a choice of basis in V. In a similar way, for a commutative ring R the group GL(n, R) may be interpreted as the group of automorphisms of a free R-module M of rank n. One can also define GL(M) for any R-module, but in general this is not isomorphic to GL(n, R) for any n. Over a field F, a matrix is invertible if and only if its determinant is nonzero; therefore, an alternative definition of GL(n, F) is as the group of matrices with nonzero determinant.
Over a non-commutative ring R, determinants are not at all well behaved; in this case, GL(n, R) may be defined as the unit group of the matrix ring M(n, R). The general linear group GL(n, R) over the field of real numbers is a real Lie group of dimension n². To see this, note that the set of all n×n real matrices, Mn(R), forms a real vector space of dimension n²; the subset GL(n, R) consists of those matrices whose determinant is non-zero. The determinant is a polynomial map, and hence GL(n, R) is an open affine subvariety of Mn(R). The Lie algebra of GL(n, R), denoted gl_n, consists of all n×n real matrices with the commutator serving as the Lie bracket. As a manifold, GL(n, R) is not connected but rather has two connected components: the matrices with positive determinant and the ones with negative determinant. The identity component, denoted by GL+(n, R), consists of the real n×n matrices with positive determinant.
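The nonzero-determinant criterion for membership in GL(n, F) can be demonstrated in the simplest case, n = 2, where the determinant has a closed form. A small sketch; `det2` is an illustrative name.

```python
def det2(A):
    """Determinant of a 2x2 matrix: ad - bc. Nonzero iff A is in GL(2, R)."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

print(det2([[1, 2], [3, 4]]))  # -2: the matrix is invertible
print(det2([[1, 2], [2, 4]]))  # 0: the rows are dependent, not in GL(2, R)
```

The sign of the determinant also tells which of the two connected components of GL(2, R) the matrix belongs to.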
36.
Real coordinate space
–
In mathematics, real coordinate space of n dimensions, written Rn, is a coordinate space that allows several real variables to be treated as a single variable. With various numbers of dimensions, Rn is used in many areas of pure and applied mathematics. With component-wise addition and scalar multiplication, it is the prototypical real vector space and is a frequently used representation of Euclidean n-space. Because of this, geometric metaphors are widely used for Rn, namely a plane for R2. For any natural number n, the set Rn consists of all n-tuples of real numbers. It is called the n-dimensional real space; for each n there exists only one Rn, the real n-space. Purely mathematical uses of Rn can be classified as follows. First, linear algebra studies its own properties under vector addition and linear transformations. Second, it is used in mathematical analysis to represent the domain of a function of several real variables. The third use parametrizes geometric points with elements of Rn; it is common in analytic, differential and algebraic geometries. Rn, together with supplemental structures on it, is also extensively used in mathematical physics, dynamical systems theory, and mathematical statistics. In applied mathematics, numerical analysis, and so on, arrays and sequences serve the same role, and any function f of n real variables can be considered as a function on Rn. The use of the real n-space, instead of several variables considered separately, can simplify notation. Consider, for n = 2, a function composition of the following form: F(t) = f(g1(t), g2(t)), where the functions g1 and g2 are continuous. If f is continuous in x1 for each fixed x2, and continuous in x2 for each fixed x1, then F is not necessarily continuous. Continuity of f in the natural R2 topology, also called multivariable continuity, is a stronger condition, and it is sufficient for continuity of the composition F. The coordinate space Rn forms a vector space over the field of real numbers with the addition of the structure of linearity. The operations on Rn as a vector space are typically defined by x + y = (x1 + y1, …, xn + yn) and αx = (αx1, …, αxn). The zero vector is given by 0 = (0, …, 0), and the additive inverse of the vector x is given by −x = (−x1, …, −xn).
This structure is important because any n-dimensional real vector space is isomorphic to the vector space Rn. In standard matrix notation, each element of Rn is typically written as a column vector and sometimes as a row vector. The coordinate space Rn may then be interpreted as the space of all n × 1 column vectors, or all 1 × n row vectors, with the matrix operations of addition and scalar multiplication. Linear transformations from Rn to Rm may then be written as matrices which act on the elements of Rn via left multiplication and on elements of Rm via right multiplication.
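The component-wise vector space operations defined above take only a few lines to realize in Python. This is a minimal sketch; `add` and `scale` are names chosen for the example.

```python
def add(x, y):
    """Component-wise addition in R^n: (x + y)_i = x_i + y_i."""
    return tuple(xi + yi for xi, yi in zip(x, y))

def scale(alpha, x):
    """Scalar multiplication in R^n: (alpha x)_i = alpha * x_i."""
    return tuple(alpha * xi for xi in x)

x = (1.0, 2.0, 3.0)
# Adding the additive inverse -x = (-1) * x recovers the zero vector.
print(add(x, scale(-1.0, x)))  # (0.0, 0.0, 0.0)
```

All eight vector space axioms reduce to the corresponding arithmetic identities applied in each coordinate, which is exactly what these two functions encode.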
37.
Nonabelian group
–
In mathematics, and specifically in group theory, a nonabelian group, sometimes called a non-commutative group, is a group (G, ∗) in which there exists at least one pair of elements a and b of G such that a ∗ b ≠ b ∗ a. This class of groups contrasts with the abelian groups, in which all pairs of elements commute. Nonabelian groups are pervasive in mathematics and physics. One of the simplest examples of a nonabelian group is the dihedral group of order 6; it is the smallest finite nonabelian group. A common example from physics is the rotation group SO(3) in three dimensions. Both discrete groups and continuous groups may be nonabelian; most of the interesting Lie groups are nonabelian, and these play an important role in gauge theory. See also: associative algebra, noncommutative geometry, Niels Henrik Abel.
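Non-commutativity is easy to witness with two concrete 2×2 matrices, a reflection and a quarter-turn rotation, both elements of the (nonabelian) group O(2). A small sketch with illustrative names:

```python
def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

reflect = [[1, 0], [0, -1]]   # reflection across the x-axis
rotate = [[0, -1], [1, 0]]    # counter-clockwise 90-degree rotation

# The two products disagree, so the group containing them is nonabelian.
print(matmul2(reflect, rotate))  # [[0, -1], [-1, 0]]
print(matmul2(rotate, reflect))  # [[0, 1], [1, 0]]
```

The two results are themselves reflections across different lines, which is why composing a rotation and a reflection in either order never commutes unless the rotation is trivial or a half-turn.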
38.
Linear subspace
–
In mathematics, a linear subspace is a vector subspace of a vector space; it is usually called simply a subspace when the context serves to distinguish it from other kinds of subspaces. Let K be a field and let V be a vector space over K. A subset W of V is a subspace if: the zero vector, 0, is in W; if u and v are elements of W, then the sum u + v is an element of W; and if u is an element of W and c is a scalar in K, then the scalar product cu is an element of W. Example I: Let the field K be R, let the vector space V be Rn, and take W to be the set of all vectors in V whose last component is 0. Then W is a subspace of V. Proof: Given u and v in W, their last components are 0, so the last component of u + v is 0 + 0 = 0; thus u + v is an element of W too. Given u in W and a scalar c in R, the last component of cu is c · 0 = 0; thus cu is an element of W too. Example II: Let the field be R again, but now let the vector space be the Cartesian plane R2. Take W to be the set of points (x, y) of R2 such that x = y. Then W is a subspace of R2. Proof: Let p = (p1, p2) and q = (q1, q2) be elements of W. Then p + q = (p1 + q1, p2 + q2); since p1 = p2 and q1 = q2, then p1 + q1 = p2 + q2, so p + q is an element of W. Let p = (p1, p2) be an element of W, that is, a point in the plane such that p1 = p2, and let c be a scalar in R. Then cp = (cp1, cp2); since p1 = p2, then cp1 = cp2, so cp is an element of W. In general, any subset of the coordinate space Rn that is defined by a system of homogeneous linear equations will yield a subspace. Geometrically, these subspaces are points, lines, planes, and so on, through the origin. Example III: Again take the field to be R, but now let the vector space V be the set R^R of all functions from R to R. Let C be the subset consisting of continuous functions. Then C is a subspace of R^R. Proof: We know from calculus that 0 ∈ C ⊂ R^R, we know from calculus that the sum of continuous functions is continuous, and, again from calculus, the product of a continuous function and a constant is continuous. Example IV: Keep the same field and vector space as before, but now consider the set Diff of all differentiable functions. The same sort of argument as before shows that this is a subspace too. Examples that extend these themes are common in functional analysis.
A way to characterize subspaces is that they are closed under linear combinations. In a topological vector space X, a subspace W need not be closed in general, but a finite-dimensional subspace is always closed. The same is true for subspaces of finite codimension, i.e. subspaces determined by a finite number of continuous linear functionals.
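The closure conditions from Example II can be spot-checked numerically for W = {(x, y) ∈ R² : x = y}. This is an illustrative sketch, not a proof; the membership test `in_W` is a name invented for this example.

```python
import random

def in_W(p, tol=1e-12):
    """Membership test for W = {(x, y) in R^2 : x = y}."""
    return abs(p[0] - p[1]) < tol

random.seed(0)
for _ in range(100):
    t = random.uniform(-10, 10)
    s = random.uniform(-10, 10)
    c = random.uniform(-10, 10)
    p, q = (t, t), (s, s)                    # two random elements of W
    assert in_W((p[0] + q[0], p[1] + q[1]))  # closed under addition
    assert in_W((c * p[0], c * p[1]))        # closed under scalar product
print("W passes the subspace closure spot-checks")
```

Random checks like this cannot replace the algebraic proof given above, but they are a quick sanity test when experimenting with candidate subspaces.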
39.
Euler's rotation theorem
–
In geometry, Euler's rotation theorem states that, in three-dimensional space, any displacement of a rigid body such that a point on the rigid body remains fixed is equivalent to a single rotation about some axis that runs through the fixed point. It also means that the composition of two rotations is also a rotation; therefore the set of rotations has a group structure, known as a rotation group. The theorem is named after Leonhard Euler, who proved it in 1775 by means of spherical geometry. The axis of rotation is known as an Euler axis, typically represented by a unit vector ê; its product by the rotation angle is known as an axis-angle representation. The extension of the theorem to kinematics yields the concept of instant axis of rotation. In linear algebra terms, the theorem states that, in 3D space, any two Cartesian coordinate systems with a common origin are related by a rotation about some fixed axis; equivalently, a rotation matrix always has a unit eigenvalue. The eigenvector corresponding to this eigenvalue is the axis of rotation connecting the two systems. Euler states the theorem as follows: Theorema. Quomodocunque sphaera circa centrum suum conuertatur, semper assignari potest diameter. Or: when a sphere is moved around its centre it is always possible to find a diameter whose direction in the displaced position is the same as in the initial position. Euler's original proof was made using spherical geometry, and therefore whenever he speaks about triangles they must be understood as spherical triangles. To arrive at a proof, Euler analyses what the situation would look like if the theorem were true: there would be a fixed point O on the sphere. Then he considers a great circle that does not contain O, and its image after rotation. He labels a point on their intersection as point A. Now A is on the initial circle, so its image will be on the transported circle; he labels that image as point a. Since A is also on the transported circle, it is the image of another point that was on the initial circle, and he labels that preimage as ɑ. Then he considers the two arcs joining ɑ and a to A. These arcs have the same length because arc ɑA is mapped onto arc Aa. Also, since O is a fixed point, triangle ɑOA is mapped onto triangle AOa, so these triangles are isosceles.
Let us construct a point that could be invariant, using the previous considerations. We start with the blue great circle and its image under the transformation, which is the red great circle, as in Figure 1. Let point A be a point of intersection of those circles. If A is not itself fixed, we label A's image as a and its preimage as ɑ, and connect these two points to A with arcs ɑA and Aa. These arcs have the same length. Take O on the bisector of angle ɑAa; then, since ɑA = Aa and O is on the bisector of angle ɑAa, we also have ɑO = aO. Now let us suppose that O′ is the image of O. Then we know angle ɑAO = angle AaO′ and orientation is preserved, so O′ must be interior to angle ɑAa. Now AO is transformed to aO′, so AO = aO′. Since AO is also the same length as aO, angle AaO = angle aAO.
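The Euler axis stated by the theorem can be extracted numerically from a rotation matrix. For rotations that are not half-turns, the axis is proportional to the off-diagonal differences of R, a standard fact about the antisymmetric part R − Rᵀ. A minimal sketch; `rotation_axis` is an illustrative name.

```python
import math

def rotation_axis(R):
    """Unit Euler axis of a 3x3 rotation matrix R (angle not 0 or 180 deg).
    The axis is proportional to (R[2][1]-R[1][2], R[0][2]-R[2][0],
    R[1][0]-R[0][1]), the components of the antisymmetric part of R."""
    v = (R[2][1] - R[1][2], R[0][2] - R[2][0], R[1][0] - R[0][1])
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

# A 90-degree rotation about the z-axis fixes the z direction.
Rz = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
print(rotation_axis(Rz))  # (0.0, 0.0, 1.0)
```

This is the numerical counterpart of the eigenvector statement above: the returned vector satisfies R v = v, the eigenvector for the unit eigenvalue.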
40.
Orthogonality
–
In mathematics, orthogonality is the generalization of the notion of perpendicularity. The concept of orthogonality has been broadly generalized in mathematics, as well as in areas such as chemistry and engineering. The word comes from the Greek ὀρθός (orthos), meaning upright, and γωνία (gonia), meaning angle. The ancient Greek ὀρθογώνιον orthogōnion and classical Latin orthogonium originally denoted a rectangle; later, they came to mean a right triangle. In the 12th century, the post-classical Latin word orthogonalis came to mean a right angle or something related to a right angle. In geometry, two Euclidean vectors are orthogonal if they are perpendicular, i.e. they form a right angle. Two vectors, x and y, in an inner product space V are orthogonal if their inner product ⟨x, y⟩ is zero; this relationship is denoted x ⊥ y. Two vector subspaces, A and B, of an inner product space V are called orthogonal subspaces if each vector in A is orthogonal to each vector in B. The largest subspace of V that is orthogonal to a given subspace is its orthogonal complement. Given a module M and its dual M∗, an element m′ of M∗ and an element m of M are orthogonal if their natural pairing is zero. Two sets S′ ⊆ M∗ and S ⊆ M are orthogonal if each element of S′ is orthogonal to each element of S. A term rewriting system is said to be orthogonal if it is left-linear and non-ambiguous; orthogonal term rewriting systems are confluent. A set of vectors in an inner product space is called pairwise orthogonal if each pairing of them is orthogonal; such a set is called an orthogonal set. Nonzero pairwise orthogonal vectors are always linearly independent. In certain cases, the word normal is used to mean orthogonal. For example, the y-axis is normal to the curve y = x² at the origin. However, normal may also refer to the magnitude of a vector. In particular, a set is called orthonormal if it is an orthogonal set of unit vectors. As a result, use of the term normal to mean orthogonal is often avoided; the word normal also has a different meaning in probability and statistics. A vector space with a bilinear form generalizes the case of an inner product.
When the bilinear form applied to two vectors results in zero, then they are orthogonal. The case of a pseudo-Euclidean plane uses the term hyperbolic orthogonality; in the diagram, axes x′ and t′ are hyperbolic-orthogonal for any given ϕ. In two- or higher-dimensional Euclidean space, two vectors are orthogonal if and only if their dot product is zero, i.e. they make an angle of 90°; hence orthogonality of vectors is an extension of the concept of perpendicular vectors to higher-dimensional spaces.
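The pairwise-orthogonality criterion for a set of vectors reduces to checking the dot product of every pairing, as sketched below (the function names are illustrative):

```python
from itertools import combinations

def dot(x, y):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(x, y))

def pairwise_orthogonal(vectors):
    """True if every pairing of vectors has dot product 0."""
    return all(dot(u, v) == 0 for u, v in combinations(vectors, 2))

# The standard basis of R^3 is pairwise orthogonal; tilting one
# vector breaks the property.
print(pairwise_orthogonal([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # True
print(pairwise_orthogonal([(1, 1, 0), (0, 1, 0), (0, 0, 1)]))  # False
```

Since nonzero pairwise orthogonal vectors are linearly independent, a passing check on n nonzero vectors in Rn also certifies that they form a basis.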