1.
Cartesian product
–
In set theory, a Cartesian product is a mathematical operation that returns a set from multiple sets. That is, for sets A and B, the Cartesian product A × B is the set of all ordered pairs (a, b) where a ∈ A and b ∈ B. Products can be specified using set-builder notation, e.g. A × B = {(a, b) | a ∈ A and b ∈ B}. A table can be created by taking the Cartesian product of a set of rows and a set of columns: if the Cartesian product rows × columns is taken, the cells of the table contain ordered pairs of the form (row value, column value). More generally, a Cartesian product of n sets, also known as an n-fold Cartesian product, can be represented by an array of n dimensions; an ordered pair is a 2-tuple or couple. The Cartesian product is named after René Descartes, whose formulation of analytic geometry gave rise to the concept.

An illustrative example is the standard 52-card deck. The standard playing card ranks form a 13-element set, and the card suits form a four-element set. The Cartesian product of these sets returns a 52-element set consisting of 52 ordered pairs: Ranks × Suits returns pairs of the form (rank, suit), while Suits × Ranks returns pairs of the form (suit, rank). The two sets are distinct, even disjoint.

The main historical example is the Cartesian plane in analytic geometry. Usually, such a pair's first and second components are called its x and y coordinates, respectively. The set of all such pairs is thus identified with the set of all points in the plane.

A formal definition of the Cartesian product from set-theoretical principles follows from a definition of ordered pair. The most common definition of ordered pairs, the Kuratowski definition, is (x, y) = {{x}, {x, y}}. Note that, under this definition, X × Y ⊆ P(P(X ∪ Y)), where P represents the power-set operation. Therefore, the existence of the Cartesian product of any two sets in ZFC follows from the axioms of pairing, union, power set, and specification.

The Cartesian product is neither associative nor commutative. Let A, B, C, and D be sets. In general, (A × B) × C ≠ A × (B × C); if, for example, A = {1}, then (A × A) × A = {((1, 1), 1)} ≠ {(1, (1, 1))} = A × (A × A). The Cartesian product behaves nicely with respect to intersections: (A ∩ B) × (C ∩ D) = (A × C) ∩ (B × D). In most cases the above statement is not true if we replace intersection with union. Other properties are related to subsets: if A ⊆ B, then A × C ⊆ B × C.

The cardinality of a set is the number of elements of the set. For example, defining two sets A = {a, b} and B = {5, 6}, both set A and set B consist of two elements each. Their Cartesian product, written as A × B, results in a new set with the elements A × B = {(a, 5), (a, 6), (b, 5), (b, 6)}. Each element of A is paired with each element of B, and each pair makes up one element of the output set, so |A × B| = |A| × |B|. The number of values in each element of the resulting set is equal to the number of sets whose Cartesian product is being taken, 2 in this case.
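The definitions above can be checked directly; a minimal sketch using Python's standard `itertools.product`, with the two-element sets and the card-deck example as illustrations (the set contents are the ones used in the text):

```python
# Cartesian products via itertools.product (standard library).
from itertools import product

A = {"a", "b"}
B = {5, 6}

# A × B: every ordered pair (a, b) with a in A and b in B.
AxB = set(product(A, B))
assert AxB == {("a", 5), ("a", 6), ("b", 5), ("b", 6)}

# Cardinality rule: |A × B| = |A| * |B|.
assert len(AxB) == len(A) * len(B)

# Ranks × Suits gives the 52-card deck; Suits × Ranks is a different,
# disjoint set, since the Cartesian product is not commutative.
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["spades", "hearts", "diamonds", "clubs"]
deck = set(product(ranks, suits))
assert len(deck) == 52
assert set(product(suits, ranks)).isdisjoint(deck)
```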

2.
Cycle graph (algebra)
–
In group theory, a sub-field of abstract algebra, a group cycle graph illustrates the various cycles of a group and is particularly useful in visualizing the structure of small finite groups. A cycle is the set of powers of a given group element a, where a^n, the n-th power of a, is the product of a multiplied by itself n times. The element a is said to generate the cycle. In a finite group, some non-zero power of a must be the group identity e; the lowest such power is the order of the cycle, the number of distinct elements in it. Cycles can overlap, or they can have no element in common but the identity. The cycle graph displays each interesting cycle as a polygon.

If a generates a cycle of order 6, then a^6 = e. The set of powers of a^2, namely {a^2, a^4, e}, is then also a cycle, but this is really no new information. Similarly, a^5 generates the same cycle as a itself. So only the primitive cycles need be considered, namely those that are not subsets of another cycle. Each of these is generated by some primitive element a. Take one point for each element of the original group. For each primitive element, connect e to a, a to a^2, and so on, up to a^(n−1) to a^n, until e is reached. The result is the cycle graph. When a^2 = e, a has order 2 and is connected to e by two edges. Except when the intent is to emphasize the two edges of the cycle, it is drawn as a single line between the two elements.

As an example of a cycle graph, consider the dihedral group Dih4. The multiplication table for this group is shown on the left. Notice the cycle e, a, a^2, a^3: it can be seen from the table that successive powers of a behave this way. The cycle can also be generated from the other end: (a^3)^2 = a^2 and (a^3)^3 = a. This behavior is true for any cycle in any group: a cycle may be traversed in either direction. Cycles that contain a non-prime number of elements implicitly have cycles that are not shown in the graph. For the group Dih4 above, we could draw a line between a^2 and e, since (a^2)^2 = e, but this edge is omitted because a^2 is part of a larger cycle.

There can be ambiguity when two cycles share an element that is not the identity element. Consider, for example, the quaternion group, whose cycle graph is shown on the right. Each of the elements in the middle row, when multiplied by itself, gives −1.
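The procedure described above, enumerating each element's cycle and discarding those that are subsets of a larger cycle, can be sketched in Python. The Klein four-group used here is an illustrative choice, not a group from the text; its cycle graph has three two-element "petals" joined at the identity:

```python
from itertools import product

# Klein four-group: Z_2 x Z_2 under componentwise addition mod 2.
elements = list(product([0, 1], repeat=2))
identity = (0, 0)

def op(x, y):
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)

def cycle(a):
    """Set of powers of a: identity, a, a*a, ... until the identity recurs."""
    seen, x = [], identity
    while True:
        seen.append(x)
        x = op(x, a)
        if x == identity:
            return frozenset(seen)

cycles = {cycle(a) for a in elements}
# Primitive cycles: those that are a strict subset of no other cycle.
primitive = [c for c in cycles if not any(c < d for d in cycles)]

# Three primitive 2-element cycles, each containing the identity.
assert len(primitive) == 3
assert all(len(c) == 2 and identity in c for c in primitive)
```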

3.
Commutative property
–
In mathematics, a binary operation is commutative if changing the order of the operands does not change the result. It is a property of many binary operations, and many mathematical proofs depend on it. Most familiar as the name of the property that says 3 + 4 = 4 + 3 or 2 × 5 = 5 × 2, the property can also be used in more advanced settings. The name is needed because there are operations, such as division and subtraction, that do not have it; for example, 3 − 5 ≠ 5 − 3.

The commutative property is a property associated with binary operations and functions. If the commutative property holds for a pair of elements under a binary operation, then the two elements are said to commute under that operation. The term "commutative" is used in several related senses.

Putting on socks resembles a commutative operation, since which sock is put on first is unimportant; either way, the result (both socks on) is the same. In contrast, putting on underwear and trousers is not commutative. The commutativity of addition is observed when paying for an item with cash: regardless of the order the bills are handed over in, they always give the same total.

The multiplication of real numbers is commutative, since y z = z y for all y, z ∈ R. For example, 3 × 5 = 5 × 3. Some binary truth functions are also commutative, since the truth tables for the functions are the same when one changes the order of the operands. For example, the logical biconditional function p ↔ q is equivalent to q ↔ p; this function is also written as p IFF q, or as p ≡ q, or as Epq. Further examples of commutative binary operations include addition and multiplication of complex numbers, and addition and scalar multiplication of vectors.

Concatenation, the act of joining character strings together, is a noncommutative operation. Rotating a book 90° around a vertical axis and then 90° around a horizontal axis produces a different orientation than when the rotations are performed in the opposite order. The twists of the Rubik's Cube are noncommutative; this can be studied using group theory. Other non-commutative binary operations include subtraction, division, matrix multiplication, and function composition.

Records of the use of the commutative property go back to ancient times. The Egyptians used the commutative property of multiplication to simplify computing products. Euclid is known to have assumed the commutative property of multiplication in his book Elements.
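The distinction between commutative and noncommutative operations can be tested mechanically over a finite sample of operands. A small sketch (the helper `commutes` and the sample values are illustrative, not from the text):

```python
from itertools import product

def commutes(op, domain):
    """True if op(x, y) == op(y, x) for every pair drawn from domain."""
    return all(op(x, y) == op(y, x) for x, y in product(domain, repeat=2))

nums = [-2, 0, 1, 3, 5]
assert commutes(lambda x, y: x + y, nums)      # addition commutes
assert commutes(lambda x, y: x * y, nums)      # multiplication commutes
assert not commutes(lambda x, y: x - y, nums)  # subtraction does not

# The biconditional p <-> q is commutative: its truth table is symmetric.
assert commutes(lambda p, q: p == q, [True, False])

# String concatenation is noncommutative.
assert "ab" + "cd" != "cd" + "ab"
```

A passing check over a finite sample only fails to find a counterexample; a genuine proof of commutativity needs an argument over the whole domain.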

4.
Dimension
–
In physics and mathematics, the dimension of a mathematical space is informally defined as the minimum number of coordinates needed to specify any point within it. Thus a line has a dimension of one because only one coordinate is needed to specify a point on it (for example, the point at 5 on a number line). A surface such as a plane or the surface of a cylinder or sphere has a dimension of two, because two coordinates are needed to specify a point on it. The inside of a cube, a cylinder or a sphere is three-dimensional because three coordinates are needed to locate a point within these spaces.

In classical mechanics, space and time are different categories and refer to absolute space and time. That conception of the world is a four-dimensional space, but not the one that was found necessary to describe electromagnetism. The four dimensions of spacetime consist of events that are not absolutely defined spatially and temporally, but rather are known relative to the motion of an observer. Minkowski space first approximates the universe without gravity; the pseudo-Riemannian manifolds of general relativity describe spacetime with matter and gravity. Ten dimensions are used to describe string theory, and the state-space of quantum mechanics is an infinite-dimensional function space.

The concept of dimension is not restricted to physical objects. High-dimensional spaces frequently occur in mathematics and the sciences; they may be parameter spaces or configuration spaces such as in Lagrangian or Hamiltonian mechanics. In mathematics, the dimension of an object is an intrinsic property, independent of the space in which the object is embedded. This intrinsic notion of dimension is one of the ways the mathematical notion of dimension differs from its common usages.

The dimension of Euclidean n-space E^n is n. When trying to generalize to other types of spaces, one is faced with the question of what makes E^n n-dimensional. One answer is that to cover a fixed ball in E^n by small balls of radius ε, one needs on the order of ε^(−n) such small balls. This observation leads to the definition of the Minkowski dimension and its more sophisticated variant, the Hausdorff dimension. Another answer is that the boundary of a ball in E^n looks locally like E^(n−1), and this leads to the notion of the inductive dimension. While these notions agree on E^n, they turn out to be different when one looks at more general spaces. A tesseract is an example of a four-dimensional object.

The rest of this section examines some of the more important mathematical definitions of dimension. A complex number has a real part x and an imaginary part y, so a single complex coordinate system may be applied to an object having two real dimensions. For example, an ordinary two-dimensional spherical surface, when given a complex metric, has one complex dimension. Complex dimensions appear in the study of complex manifolds and algebraic varieties.

The dimension of a vector space is the number of vectors in any basis for the space. This notion of dimension is referred to as the Hamel dimension or algebraic dimension to distinguish it from other notions of dimension.
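The vector-space notion of dimension can be computed concretely: the dimension of the span of a finite set of vectors is the rank of the matrix whose rows are those vectors. A minimal sketch using exact rational Gaussian elimination (the helper `rank` and the sample vectors are illustrative, not from the text):

```python
from fractions import Fraction

def rank(rows):
    """Dimension of the span of the given vectors, via Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Three vectors in R^3, but the third is the sum of the first two,
# so they span only a two-dimensional subspace.
vectors = [[1, 0, 1], [0, 1, 1], [1, 1, 2]]
assert rank(vectors) == 2
assert rank([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) == 3
```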

5.
Associative property
–
In mathematics, the associative property is a property of some binary operations. In propositional logic, associativity is a rule of replacement for expressions in logical proofs. Rearranging the parentheses in an expression built from an associative operation will not change its value. Consider the following equations: (2 + 3) + 4 = 2 + (3 + 4) = 9 and 2 × (3 × 4) = (2 × 3) × 4 = 24. Even though the parentheses were rearranged on each line, the values of the expressions were not altered. Since this holds true when performing addition and multiplication on any real numbers, it can be said that addition and multiplication of real numbers are associative operations.

Associativity is not to be confused with commutativity, which addresses whether or not the order of two operands changes the result. For example, the order doesn't matter in the multiplication of real numbers; that is, a × b = b × a. Associative operations are abundant in mathematics; in fact, many algebraic structures explicitly require their binary operations to be associative. However, many important and interesting operations are non-associative; some examples include subtraction, exponentiation, and the vector cross product.

Formally, a binary operation ∗ on a set S is associative if (x ∗ y) ∗ z = x ∗ (y ∗ z) for all x, y, z in S, in which case the product can be written unambiguously as xyz. The associative law can also be expressed in functional notation thus: f(f(x, y), z) = f(x, f(y, z)). If a binary operation is associative, repeated application of the operation produces the same result regardless of how valid pairs of parentheses are inserted in the expression. This is called the generalized associative law; thus the product of four elements can be written unambiguously as abcd. As the number of elements increases, the number of ways to insert parentheses grows quickly.

Some examples of associative operations include the following. String concatenation is associative: joining "ab" to "c" after joining "a" to "b", or joining "a" to "bc", the two methods produce the same result. In arithmetic, addition and multiplication of real numbers are associative, i.e. (x + y) + z = x + (y + z) and (x y) z = x (y z) for all x, y, z ∈ R, and because of associativity the grouping parentheses can be omitted without ambiguity. Addition and multiplication of complex numbers and quaternions are associative. Addition of octonions is also associative, but multiplication of octonions is non-associative.

The greatest common divisor and least common multiple functions act associatively: gcd(gcd(x, y), z) = gcd(x, gcd(y, z)) = gcd(x, y, z) and lcm(lcm(x, y), z) = lcm(x, lcm(y, z)) = lcm(x, y, z) for all x, y, z ∈ Z. Taking the intersection or the union of sets is associative: (A ∩ B) ∩ C = A ∩ (B ∩ C) = A ∩ B ∩ C and (A ∪ B) ∪ C = A ∪ (B ∪ C) = A ∪ B ∪ C for all sets A, B, C. Slightly more generally, given four sets M, N, P and Q, with maps h : M → N, g : N → P, and f : P → Q, then f ∘ (g ∘ h) = (f ∘ g) ∘ h; in short, composition of maps is always associative. For an operation given by an explicit table on a set with three elements A, B, and C, associativity can be checked directly by comparing (x ∗ y) ∗ z with x ∗ (y ∗ z) for all triples.
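Associativity over a finite sample of operands can be checked by brute force, in the same spirit as checking a three-element operation table. A small sketch (the helper `associates` and the sample values are illustrative, not from the text):

```python
from math import gcd
from functools import reduce

def associates(op, domain):
    """True if op(op(x, y), z) == op(x, op(y, z)) for all sampled triples."""
    return all(op(op(x, y), z) == op(x, op(y, z))
               for x in domain for y in domain for z in domain)

nums = [1, 2, 3, 4, 6, 12]
assert associates(lambda x, y: x + y, nums)            # addition
assert associates(lambda x, y: x * y, nums)            # multiplication
assert associates(gcd, nums)                           # greatest common divisor
assert not associates(lambda x, y: x - y, nums)        # subtraction fails
assert not associates(lambda x, y: x ** y, [2, 3, 4])  # exponentiation fails

# The generalized associative law: a chained product is unambiguous.
assert reduce(lambda x, y: x * y, [2, 3, 4]) == (2 * 3) * 4 == 2 * (3 * 4)
```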

6.
Cauchy sequence
–
In mathematics, a Cauchy sequence, named after Augustin-Louis Cauchy, is a sequence whose elements become arbitrarily close to each other as the sequence progresses. More precisely, given any small positive distance, all but a finite number of elements of the sequence are less than that given distance from each other. It is not sufficient for each term to become arbitrarily close to the preceding term. For instance, in the harmonic series ∑ 1/n, the difference between consecutive terms decreases as 1/n; however, the series does not converge. Rather, it is required that all terms get arbitrarily close to each other, starting from some point. More formally, for any given ε > 0 there exists an N such that |x_m − x_n| < ε for any m, n > N.

The notions above are not as unfamiliar as they might at first appear. The customary acceptance of the fact that any real number x has a decimal expansion is an implicit acknowledgment that a particular Cauchy sequence of rational numbers has the real limit x. In some cases it may be difficult to describe x independently of such a limiting process involving rational numbers. Generalizations of Cauchy sequences in more abstract uniform spaces exist in the form of Cauchy filters. In a similar way one can define Cauchy sequences of rational or complex numbers. Cauchy formulated such a condition by requiring x_m − x_n to be infinitesimal for every pair of infinite m, n. To define Cauchy sequences in any metric space X, the absolute value |x_m − x_n| is replaced by the distance d(x_m, x_n) between x_m and x_n.

A metric space X in which every Cauchy sequence converges to an element of X is called complete. The real numbers are complete under the metric induced by the usual absolute value, and one of the standard constructions of the real numbers involves Cauchy sequences of rational numbers. A rather different type of example is afforded by a metric space X which has the discrete metric: any Cauchy sequence of elements of X must be constant beyond some fixed point, and converges to the eventually repeating term.

The rational numbers Q are not complete: there are sequences of rationals that converge to irrational numbers. If one considers such a sequence as a sequence of real numbers, however, it converges; one example converges to the real number φ = (1 + √5)/2, the golden ratio, which is irrational.

Every Cauchy sequence of real numbers is bounded, hence by the Bolzano–Weierstrass theorem has a convergent subsequence, hence is itself convergent. This proof of the completeness of the real numbers implicitly makes use of the least upper bound axiom. The alternative approach, mentioned above, of constructing the real numbers as the completion of the rational numbers makes the completeness of the real numbers tautological.

The Cauchy criterion also applies to series. A series ∑_{n=1}^∞ x_n is considered to be convergent if and only if the sequence of partial sums (s_m), with s_m = x_1 + … + x_m, is convergent. It is a routine matter to determine whether the sequence of partial sums is Cauchy or not.
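The harmonic-series remark above can be illustrated numerically: consecutive partial sums of both ∑ 1/n and ∑ 1/n² get arbitrarily close, but only the tail of the second stays within a small band, as a Cauchy sequence must. A rough sketch (the helper names and cutoff values are illustrative choices, not from the text):

```python
def partial_sums(term, count):
    """First `count` partial sums s_m = term(1) + ... + term(m)."""
    total, sums = 0.0, []
    for n in range(1, count + 1):
        total += term(n)
        sums.append(total)
    return sums

def max_gap(sums, start):
    """Largest |s_m - s_n| over m, n >= start (sums are increasing here)."""
    tail = sums[start:]
    return max(tail) - min(tail)

harmonic = partial_sums(lambda n: 1 / n, 100_000)
squares = partial_sums(lambda n: 1 / n ** 2, 100_000)

# Beyond N = 10_000, the 1/n^2 partial sums stay within a tiny band,
# while the harmonic partial sums still drift apart (the series diverges).
assert max_gap(squares, 10_000) < 1e-3
assert max_gap(harmonic, 10_000) > 2.0
```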

7.
Cokernel
–
In mathematics, the cokernel of a linear mapping of vector spaces f : X → Y is the quotient space Y/im(f) of the codomain of f by the image of f. The dimension of the cokernel is called the corank of f. Cokernels are dual to the kernels of category theory, hence the name: the kernel is a subobject of the domain, while the cokernel is a quotient object of the codomain. This is elaborated in the intuition section below. Often the map q : Y → Q is understood, and Q itself is called the cokernel of f.

In many situations in abstract algebra, such as for abelian groups, vector spaces or modules, the cokernel of the homomorphism f : X → Y is the quotient of Y by the image of f. In topological settings, such as with bounded linear operators between Hilbert spaces, one typically has to take the closure of the image before passing to the quotient.

One can define the cokernel in the general framework of category theory. In order for the definition to make sense, the category in question must have zero morphisms. The cokernel of a morphism f : X → Y is defined as the coequalizer of f and the zero morphism 0_XY : X → Y. Explicitly, the cokernel of f : X → Y is an object Q together with a morphism q : Y → Q such that the composition q ∘ f is the zero morphism, i.e. the diagram commutes. Moreover, the morphism q must be universal for this diagram. Like all coequalizers, the cokernel q : Y → Q is necessarily an epimorphism. Conversely, an epimorphism is called normal if it is the cokernel of some morphism. A category is called conormal if every epimorphism is normal.

In the category of groups, the cokernel of a group homomorphism f : G → H is the quotient of H by the normal closure of the image of f. In the case of abelian groups, since every subgroup is normal, the cokernel is just H modulo the image of f.

In a preadditive category, it makes sense to add and subtract morphisms. In such a category, the coequalizer of two morphisms f and g (if it exists) is just the cokernel of their difference: coeq(f, g) = coker(g − f). In an abelian category, the image and coimage of a morphism f are given by im(f) = ker(coker(f)) and coim(f) = coker(ker(f)). In particular, every abelian category is normal. That is, every monomorphism m can be written as the kernel of some morphism.

Formally, one may connect the kernel and the cokernel of a map T : V → W by the exact sequence 0 → ker T → V → W → coker T → 0. As a simple example, consider the map T : R^2 → R^2 given by T(x, y) = (0, y). Then for an equation T(x, y) = (a, b) to have a solution, we must have a = 0 (one obstruction), and in that case the solution space is {(x, b) : x ∈ R}, or equivalently stated, (0, b) + {(x, 0) : x ∈ R} (one degree of freedom). The cokernel can thus be thought of as something that detects surjections in the same way that the kernel detects injections: a map is injective if and only if its kernel is trivial, and surjective if and only if its cokernel is trivial.
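For finite-dimensional vector spaces the corank is computable from the rank: dim coker f = dim Y − rank f, mirroring dim ker f = dim X − rank f from rank-nullity. A small sketch using the map T(x, y) = (0, y) from the example above (the helper `rank2x2` is an illustrative construction, not from the text):

```python
def rank2x2(m):
    """Rank of a 2x2 matrix [[a, b], [c, d]] with exact integer entries."""
    (a, b), (c, d) = m
    if a * d - b * c != 0:
        return 2
    return 1 if any(v != 0 for row in m for v in row) else 0

T = [[0, 0],   # T(x, y) = (0, y) as a matrix acting on column vectors
     [0, 1]]

r = rank2x2(T)
dim_ker = 2 - r     # one degree of freedom in solutions (the x-axis)
dim_coker = 2 - r   # one obstruction to solvability (the first coordinate)
assert (r, dim_ker, dim_coker) == (1, 1, 1)
```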

8.
Abstract algebra
–
In algebra, which is a broad division of mathematics, abstract algebra is the study of algebraic structures. Algebraic structures include groups, rings, fields, modules, vector spaces, and lattices. The term "abstract algebra" was coined in the early 20th century to distinguish this area of study from the other parts of algebra.

Algebraic structures, with their associated homomorphisms, form mathematical categories. Category theory is a formalism that allows a unified way of expressing properties and constructions that are similar for various structures. Universal algebra is a related subject that studies types of algebraic structures as single objects; for example, the structure of groups is a single object in universal algebra.

As in other parts of mathematics, concrete problems and examples have played important roles in the development of abstract algebra. Through the end of the nineteenth century, many (perhaps most) of these problems were in some way related to the theory of algebraic equations. Numerous textbooks in abstract algebra start with axiomatic definitions of various algebraic structures. This creates a false impression that in algebra axioms had come first and then served as a motivation for further study. The true order of historical development was almost exactly the opposite. For example, the hypercomplex numbers of the nineteenth century had kinematic and physical motivations.

An archetypical example of this progressive synthesis can be seen in the history of group theory. There were several threads in the early development of group theory, in modern language loosely corresponding to number theory, theory of equations, and geometry. Leonhard Euler considered algebraic operations on numbers modulo an integer (modular arithmetic). Lagrange's goal was to understand why equations of third and fourth degree admit formulae for solutions, and he identified as key objects permutations of the roots. An important novel step taken by Lagrange in this paper was the abstract view of the roots, i.e. as symbols rather than as numbers. However, he did not consider composition of permutations.

Serendipitously, the first edition of Edward Waring's Meditationes Algebraicae appeared in the same year, with an expanded version published in 1782. Waring proved the fundamental theorem on symmetric functions, and specially considered the relation between the roots of a quartic equation and its resolvent cubic. Kronecker claimed in 1888 that the study of modern algebra began with this first paper of Vandermonde; Cauchy states quite clearly that Vandermonde had priority over Lagrange for this remarkable idea, which eventually led to the study of group theory. Paolo Ruffini was the first person to develop the theory of permutation groups; his goal was to establish the impossibility of an algebraic solution to a general algebraic equation of degree greater than four.

9.
Absolutely convex set
–
A set C in a real or complex vector space is said to be absolutely convex or disked if it is convex and balanced, in which case it is called a disk. The absolutely convex hull of a set A is the smallest absolutely convex set containing A; it can be written as absconv A = { λ_1 x_1 + … + λ_n x_n : n ∈ N, x_i ∈ A, |λ_1| + … + |λ_n| ≤ 1 }.
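The hull formula can be probed numerically: random coefficients with |λ_1| + … + |λ_n| ≤ 1 always land inside the hull. A rough sketch for A = {(1, 0), (0, 1)} in R², for which the absolutely convex hull is the diamond |x| + |y| ≤ 1 (the helper `absconv_sample` is an illustrative construction, not from the text):

```python
import random

A = [(1.0, 0.0), (0.0, 1.0)]

def absconv_sample():
    """A random point sum(lambda_i * x_i) with sum(|lambda_i|) <= 1."""
    lam = [random.uniform(-1, 1) for _ in A]
    s = sum(abs(l) for l in lam)
    if s > 1:                      # rescale so the coefficient sum is <= 1
        lam = [l / s for l in lam]
    x = sum(l * p[0] for l, p in zip(lam, A))
    y = sum(l * p[1] for l, p in zip(lam, A))
    return x, y

# Every sampled combination lies in the diamond |x| + |y| <= 1, which is
# both convex and balanced (closed under scaling by |t| <= 1), i.e. a disk.
for _ in range(1000):
    x, y = absconv_sample()
    assert abs(x) + abs(y) <= 1 + 1e-9
```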

10.
Additive inverse
–
In mathematics, the additive inverse of a number a is the number that, when added to a, yields zero. This number is also known as the opposite, the sign change, or the negation. For a real number, it reverses the sign: the opposite of a positive number is negative, and the opposite of a negative number is positive. Zero is the additive inverse of itself.

The additive inverse of a is denoted by unary minus: −a. For example, the additive inverse of 7 is −7, because 7 + (−7) = 0. The additive inverse is defined as the inverse element under the binary operation of addition. As for any inverse operation, a double additive inverse has no net effect: −(−x) = x.

For a number (and, more generally, in any ring), the additive inverse can be calculated using multiplication by −1; that is, −n = −1 × n. Examples of rings of numbers are the integers, the rational numbers, and the real numbers. The additive inverse is closely related to subtraction, which can be viewed as an addition of the opposite: a − b = a + (−b). Conversely, the additive inverse can be thought of as subtraction from zero: −a = 0 − a.

If an operation + admits an identity element o, then this element is unique. For a given x, if there exists x′ such that x + x′ = o, then x′ is called an additive inverse of x. If + is associative, then an additive inverse is unique. To see this, let x′ and x″ each be additive inverses of x; then x′ = x′ + o = x′ + (x + x″) = (x′ + x) + x″ = o + x″ = x″. For example, since addition of real numbers is associative, each real number has a unique additive inverse.

All the following examples are in fact abelian groups. Complex numbers: on the complex plane, the negation operation rotates a complex number 180 degrees around the origin. Addition of real- and complex-valued functions: here, the additive inverse of a function f is the function −f defined by (−f)(x) = −f(x) for all x, so that f + (−f) = o, the zero function. More generally, what precedes applies to all functions with values in an abelian group, including sequences and matrices. In a vector space, the additive inverse −v is often called the opposite vector of v; it has the same magnitude as the original and the opposite direction. Additive inversion corresponds to multiplication by −1.
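The identities above (defining property, double inverse, multiplication by −1, subtraction as addition of the opposite) can be verified across several number types in a short Python sketch (the sample values are illustrative):

```python
from fractions import Fraction

for a in (7, -3, Fraction(2, 5), 2.5, complex(1, -2)):
    assert a + (-a) == 0          # defining property of the additive inverse
    assert -(-a) == a             # double inverse has no net effect
    assert -a == -1 * a           # multiplication by -1 in a ring
    assert a - a == a + (-a)      # subtraction as addition of the opposite
    assert -a == 0 - a            # additive inverse as subtraction from zero

# Componentwise inverse of a vector: same magnitude, opposite direction.
v = (3, -4)
neg_v = tuple(-x for x in v)
assert neg_v == (-3, 4)
assert sum(x * x for x in v) == sum(x * x for x in neg_v)
```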