1.
Abstract algebra
–
In algebra, which is a broad division of mathematics, abstract algebra is the study of algebraic structures. Algebraic structures include groups, rings, fields, modules, vector spaces, and lattices. The term abstract algebra was coined in the early 20th century to distinguish this area of study from the other parts of algebra. Algebraic structures, with their associated homomorphisms, form mathematical categories. Category theory is a formalism that allows a unified way of expressing properties and constructions that are similar for various structures. Universal algebra is a related subject that studies types of algebraic structures as single objects. For example, the structure of groups is a single object in universal algebra. As in other parts of mathematics, concrete problems and examples have played important roles in the development of abstract algebra; through the end of the nineteenth century, many – perhaps most – of these problems were in some way related to the theory of algebraic equations. Numerous textbooks in abstract algebra start with definitions of various algebraic structures, which creates the impression that in algebra axioms came first and then served as a motivation. The true order of development was almost exactly the opposite. For example, the hypercomplex numbers of the nineteenth century had kinematic and physical motivations. An archetypical example of this progressive synthesis can be seen in the history of group theory. There were several threads in the early development of group theory, in modern language loosely corresponding to number theory, theory of equations, and geometry. Leonhard Euler considered algebraic operations on numbers modulo an integer (modular arithmetic) in his generalization of Fermat's little theorem. Lagrange's goal was to understand why equations of third and fourth degree admit formulae for solutions, and he identified the permutations of the roots as the key objects. An important novel step taken by Lagrange in this paper was the abstract view of the roots, i.e. as symbols and not as numbers.
However, he did not consider composition of permutations. Serendipitously, the first edition of Edward Waring's Meditationes Algebraicae appeared in the same year, with an expanded version published in 1782. Waring proved the theorem on symmetric functions, and specially considered the relation between the roots of a quartic equation and its resolvent cubic. Kronecker claimed in 1888 that the study of modern algebra began with this first paper of Vandermonde; Cauchy states quite clearly that Vandermonde had priority over Lagrange for this remarkable idea, which eventually led to the study of group theory. Paolo Ruffini was the first person to develop the theory of permutation groups, and his goal was to establish the impossibility of an algebraic solution to a general algebraic equation of degree greater than four.
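The composition of root permutations that Lagrange stopped short of, and on which Ruffini's permutation groups rest, can be sketched in a few lines of Python. The tuple encoding and the particular permutations p and q below are illustrative choices, not anything taken from the historical sources:

```python
# Two permutations of four roots {0, 1, 2, 3}, written as tuples p
# where p[i] is the image of i.
p = (1, 0, 2, 3)   # swap the first two roots
q = (0, 2, 1, 3)   # swap the middle two roots

def compose(p, q):
    # (p ∘ q)[i] = p[q[i]]: apply q first, then p
    return tuple(p[q[i]] for i in range(len(q)))

# Composition of permutations is not commutative in general:
assert compose(p, q) == (1, 2, 0, 3)
assert compose(q, p) == (2, 0, 1, 3)
assert compose(p, q) != compose(q, p)
```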
2.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to its exact scope and definition. Mathematicians seek out patterns and use them to formulate new conjectures; they resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, and measurement. Practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said, "The universe cannot be read until we have learned the language and become familiar with the characters in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics, "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences.
Applied mathematics has led to entirely new mathematical disciplines, such as statistics and game theory. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind. There is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns; in Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a systematic study of mathematics in its own right with Greek mathematics. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today; the overwhelming majority of works in this ocean contain new mathematical theorems and their proofs. The word máthēma is derived from μανθάνω (manthano), while the modern Greek equivalent is μαθαίνω (mathaino), both meaning "to learn". In Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study" even in Classical times.
3.
Associative
–
In mathematics, the associative property is a property of some binary operations. In propositional logic, associativity is a valid rule of replacement for expressions in logical proofs. Within an expression containing two or more occurrences in a row of the same associative operator, the order in which the operations are performed does not matter as long as the sequence of the operands is not changed. That is, rearranging the parentheses in such an expression will not change its value. Consider the following equations: (2 + 3) + 4 = 2 + (3 + 4) = 9 and (2 × 3) × 4 = 2 × (3 × 4) = 24. Even though the parentheses were rearranged on each line, the values of the expressions were not altered. Since this holds true when performing addition and multiplication on any real numbers, it can be said that addition and multiplication of real numbers are associative operations. Associativity is not to be confused with commutativity, which addresses whether or not the order of two operands changes the result. For example, the order doesn't matter in the multiplication of real numbers, that is, a × b = b × a. Associative operations are abundant in mathematics; in fact, many algebraic structures explicitly require their binary operations to be associative. However, many important and interesting operations are non-associative; some examples include subtraction, exponentiation and the vector cross product. Formally, a binary operation ∗ on a set S is called associative if (x ∗ y) ∗ z = x ∗ (y ∗ z) = xyz for all x, y, z in S. The associative law can also be expressed in functional notation thus: f(f(x, y), z) = f(x, f(y, z)). If a binary operation is associative, repeated application of the operation produces the same result regardless of how valid pairs of parentheses are inserted in the expression. This is called the generalized associative law; thus the product of four elements can be written unambiguously as abcd. As the number of elements increases, the number of possible ways to insert parentheses grows quickly. Some examples of associative operations include the following. The concatenation of the three strings "hello", " ", "world" can be computed by concatenating the first two strings and appending the third, or by joining the second and third strings and prepending the first; the two methods produce the same result, so string concatenation is associative. In arithmetic, addition and multiplication of real numbers are associative, i.e. (x + y) + z = x + (y + z) = x + y + z and (xy)z = x(yz) = xyz for all x, y, z ∈ ℝ. Because of associativity, the grouping parentheses can be omitted without ambiguity. Addition and multiplication of complex numbers and quaternions are associative.
Addition of octonions is also associative, but multiplication of octonions is non-associative. The greatest common divisor and least common multiple functions act associatively: gcd(gcd(x, y), z) = gcd(x, gcd(y, z)) and lcm(lcm(x, y), z) = lcm(x, lcm(y, z)) for all x, y, z ∈ ℤ. Taking the intersection or the union of sets is associative: (A ∩ B) ∩ C = A ∩ (B ∩ C) and (A ∪ B) ∪ C = A ∪ (B ∪ C) for all sets A, B, C. Slightly more generally, given four sets M, N, P and Q, with maps h: M → N, g: N → P, and f: P → Q, one has f ∘ (g ∘ h) = (f ∘ g) ∘ h; in short, composition of maps is always associative. A binary operation on a set with three elements, A, B, and C, can also be associative; for such an operation one has, for example, A ∗ (B ∗ C) = (A ∗ B) ∗ C.
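The associative and non-associative operations discussed above can be checked directly in Python; the particular numbers and strings below are arbitrary test values:

```python
from math import gcd

def lcm(x, y):
    # least common multiple via gcd: lcm(x, y) * gcd(x, y) == x * y for x, y > 0
    return x * y // gcd(x, y)

# Associative operations: the grouping does not change the result.
x, y, z = 2, 3, 4
assert (x + y) + z == x + (y + z) == 9
assert (x * y) * z == x * (y * z) == 24
assert gcd(gcd(12, 18), 30) == gcd(12, gcd(18, 30))
assert lcm(lcm(4, 6), 10) == lcm(4, lcm(6, 10))
assert ("hello" + " ") + "world" == "hello" + (" " + "world")  # concatenation

# Non-associative operations: the grouping matters.
assert (10 - 5) - 2 != 10 - (5 - 2)      # subtraction: 3 vs 7
assert (2 ** 3) ** 2 != 2 ** (3 ** 2)    # exponentiation: 64 vs 512
```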
4.
Category (mathematics)
–
In mathematics, a category is an algebraic structure that comprises objects that are linked by arrows. A category has two basic properties: the ability to compose the arrows associatively, and the existence of an identity arrow for each object. A simple example is the category of sets, whose objects are sets and whose arrows are functions. On the other hand, any monoid can be understood as a special sort of category, and so can any preorder. In general, the objects and arrows may be abstract entities of any kind, and this is the central idea of category theory, a branch of mathematics which seeks to generalize all of mathematics in terms of objects and arrows, independent of what the objects and arrows represent. Virtually every branch of mathematics can be described in terms of categories. For more extensive motivational background and historical notes, see category theory. Two categories are the same if they have the same collection of objects, the same collection of arrows, and the same associative method of composing any pair of arrows. Two categories may also be considered equivalent for purposes of category theory even when they are not precisely the same. All of the preceding categories have the identity map as identity arrow and composition as the associative operation on arrows. The classic and still much-used text on category theory is Categories for the Working Mathematician by Saunders Mac Lane. Other references are given in the References below; the basic definitions in this article are contained within the first few chapters of any of these books. Category theory first appeared in a paper entitled "General Theory of Natural Equivalences", written by Samuel Eilenberg and Saunders Mac Lane in 1945. There are many equivalent definitions of a category. One commonly used definition is as follows. A category C consists of a class ob(C) of objects and a class hom(C) of morphisms, or arrows, or maps, between the objects. Each morphism f has a source object a and a target object b, where a and b are in ob(C); we write f: a → b, and we say "f is a morphism from a to b".
From these axioms, one can prove that there is exactly one identity morphism for every object. Some authors use a variation of the definition in which each object is identified with the corresponding identity morphism. A category C is called small if both ob(C) and hom(C) are actually sets and not proper classes, and large otherwise. A locally small category is a category such that for all objects a and b, the morphisms from a to b form a set. Many important categories in mathematics, although not small, are at least locally small. The class of all sets together with all functions between sets, where composition is the usual function composition, forms a large category, Set. It is the most basic and the most commonly used category in mathematics. The category Rel consists of all sets, with binary relations as morphisms.
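A minimal sketch in Python of how the definition above can be represented concretely: objects as finite sets, morphisms as (source, target, function) triples, with the identity law checked pointwise. This triple representation is an illustrative choice, not a standard library API:

```python
# Objects are Python frozensets; a morphism is a (source, target, function) triple.
def identity(obj):
    # The identity arrow on an object
    return (obj, obj, lambda x: x)

def compose(g, f):
    # g ∘ f is defined only when f's target equals g's source
    assert f[1] == g[0], "composition undefined"
    return (f[0], g[1], lambda x: g[2](f[2](x)))

A = frozenset({1, 2})
B = frozenset({"a", "b"})
f = (A, B, lambda n: "a" if n == 1 else "b")   # a morphism f: A -> B

# Identity laws: id_B ∘ f = f = f ∘ id_A, checked pointwise on A
for n in A:
    assert compose(identity(B), f)[2](n) == f[2](n) == compose(f, identity(A))[2](n)
```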
5.
Object (category theory)
–
In mathematics, a category is an algebraic structure that comprises objects that are linked by arrows. A category has two basic properties: the ability to compose the arrows associatively, and the existence of an identity arrow for each object. A simple example is the category of sets, whose objects are sets and whose arrows are functions. On the other hand, any monoid can be understood as a special sort of category, and so can any preorder. In general, the objects and arrows may be abstract entities of any kind, and this is the central idea of category theory, a branch of mathematics which seeks to generalize all of mathematics in terms of objects and arrows, independent of what the objects and arrows represent. Virtually every branch of mathematics can be described in terms of categories. For more extensive motivational background and historical notes, see category theory. Two categories are the same if they have the same collection of objects, the same collection of arrows, and the same associative method of composing any pair of arrows. Two categories may also be considered equivalent for purposes of category theory even when they are not precisely the same. All of the preceding categories have the identity map as identity arrow and composition as the associative operation on arrows. The classic and still much-used text on category theory is Categories for the Working Mathematician by Saunders Mac Lane. Other references are given in the References below; the basic definitions in this article are contained within the first few chapters of any of these books. Category theory first appeared in a paper entitled "General Theory of Natural Equivalences", written by Samuel Eilenberg and Saunders Mac Lane in 1945. There are many equivalent definitions of a category. One commonly used definition is as follows. A category C consists of a class ob(C) of objects and a class hom(C) of morphisms, or arrows, or maps, between the objects. Each morphism f has a source object a and a target object b, where a and b are in ob(C); we write f: a → b, and we say "f is a morphism from a to b".
From these axioms, one can prove that there is exactly one identity morphism for every object. Some authors use a variation of the definition in which each object is identified with the corresponding identity morphism. A category C is called small if both ob(C) and hom(C) are actually sets and not proper classes, and large otherwise. A locally small category is a category such that for all objects a and b, the morphisms from a to b form a set. Many important categories in mathematics, although not small, are at least locally small. The class of all sets together with all functions between sets, where composition is the usual function composition, forms a large category, Set. It is the most basic and the most commonly used category in mathematics. The category Rel consists of all sets, with binary relations as morphisms.
6.
Function composition
–
In mathematics, function composition is the pointwise application of one function to the result of another to produce a third function. The resulting composite function is denoted g ∘ f: X → Z. The notation g ∘ f is read as "g circle f", "g round f", "g composed with f", "g after f", "g following f", "g of f", or "g on f". Intuitively, composing two functions is a chaining process in which the output of the inner function becomes the input of the outer function. The composition of functions is a special case of the composition of relations, but the composition of functions has some additional properties. Composition of functions on a finite set can be carried out by listing argument–value pairs: if f and g are given as such tables, then g ∘ f is the table obtained by applying f first and then g. The composition of functions is always associative, a property inherited from the composition of relations. Since there is no distinction between the choices of placement of parentheses, they may be left off without causing any ambiguity. In a strict sense, the composition g ∘ f can be built only if f's codomain equals g's domain; in a wider sense it is sufficient that the former is a subset of the latter. The functions g and f are said to commute with each other if g ∘ f = f ∘ g. Commutativity is a special property, attained only by particular functions, and often in special circumstances. For example, |x| + 3 = |x + 3| only when x ≥ 0. The composition of one-to-one functions is always one-to-one. Similarly, the composition of two onto functions is always onto. It follows that the composition of two bijections is also a bijection. The inverse function of a composition has the property that (g ∘ f)⁻¹ = f⁻¹ ∘ g⁻¹. Derivatives of compositions involving differentiable functions can be found using the chain rule; higher derivatives of such functions are given by Faà di Bruno's formula. Suppose one has two functions f: X → X and g: X → X having the same domain and codomain. Then one can form chains of transformations composed together, such as f ∘ f ∘ g ∘ f; such chains have the algebraic structure of a monoid, called a transformation monoid or composition monoid.
In general, transformation monoids can have remarkably complicated structure; one particular notable example is the de Rham curve. The set of all functions f: X → X is called the transformation semigroup or symmetric semigroup on X. If the transformations are bijective, then the set of all possible combinations of these functions forms a transformation group.
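The associativity and general non-commutativity of composition described above are easy to check pointwise for concrete functions; the functions f, g, h below are arbitrary examples:

```python
def compose(g, f):
    # (g ∘ f)(x) = g(f(x)): apply f first, then g
    return lambda x: g(f(x))

f = lambda x: x + 3
g = lambda x: 2 * x
h = lambda x: x ** 2

# Composition is associative: h ∘ (g ∘ f) = (h ∘ g) ∘ f
for x in range(-5, 6):
    assert compose(h, compose(g, f))(x) == compose(compose(h, g), f)(x)

# ...but generally not commutative: g ∘ f ≠ f ∘ g for these f and g
assert compose(g, f)(1) == 8   # 2 * (1 + 3)
assert compose(f, g)(1) == 5   # (2 * 1) + 3
```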
7.
Computer science
–
Computer science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems, and the field can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory, are highly abstract, while other fields focus on the challenges in implementing computation. Human–computer interaction considers the challenges in making computers and computations useful and usable. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity; further, algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner; he may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. Charles Babbage started developing his Analytical Engine in 1834, and in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched card system derived from the Jacquard loom, making it infinitely programmable. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information. When a general-purpose computing machine was finally finished, some hailed it as "Babbage's dream come true".
During the 1940s, as new and more powerful computing machines were developed and it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as a distinct academic discipline in the 1950s. The world's first computer science degree program was the Cambridge Diploma in Computer Science. The first computer science degree program in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own rights, and it is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and later the IBM 709 computers. Still, working with the IBM was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again. During the late 1950s, the computer science discipline was very much in its developmental stages. Time has seen significant improvements in the usability and effectiveness of computing technology, and modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.
8.
String (computer science)
–
In computer programming, a string is traditionally a sequence of characters, either as a literal constant or as some kind of variable. The latter may allow its elements to be mutated and the length changed. A string is generally understood as a data type and is often implemented as an array of bytes that stores a sequence of elements, typically characters, using some character encoding. A string may also denote more general arrays or other sequence data types and structures. When a string appears literally in source code, it is known as a string literal or an anonymous string. In formal languages, which are used in logic and theoretical computer science, a string is a finite sequence of symbols chosen from a set called an alphabet. Let Σ be a non-empty finite set of symbols, called the alphabet; no assumption is made about the nature of the symbols. A string over Σ is any finite sequence of symbols from Σ. For example, if Σ = {0, 1}, then 01011 is a string over Σ. The length of a string s is the number of symbols in s and can be any non-negative integer; it is often denoted |s|. The empty string ε is the unique string over Σ of length 0. The set of all strings over Σ of length n is denoted Σn; for example, if Σ = {0, 1}, then Σ2 = {00, 01, 10, 11}. Note that Σ0 = {ε} for any alphabet Σ. The set of all strings over Σ of any length is the Kleene closure of Σ and is denoted Σ*. In terms of Σn, Σ* = ⋃_{n ∈ ℕ ∪ {0}} Σn. For example, if Σ = {0, 1}, then Σ* = {ε, 0, 1, 00, 01, 10, 11, 000, ...}. Although the set Σ* itself is countably infinite, each element of Σ* is a string of finite length. A set of strings over Σ (i.e. any subset of Σ*) is called a formal language over Σ. For example, if Σ = {0, 1}, the set of strings with an even number of zeros is a formal language over Σ. Concatenation is an important binary operation on Σ*. For any two strings s and t in Σ*, their concatenation is defined as the sequence of symbols in s followed by the sequence of symbols in t, and is denoted st. For example, if s = bear and t = hug, then st = bearhug. String concatenation is an associative, but non-commutative operation.
The empty string ε serves as the identity element: for any string s, εs = sε = s. Therefore, the set Σ* and the concatenation operation form a monoid, the free monoid generated by Σ. In addition, the length function defines a monoid homomorphism from Σ* to the non-negative integers, since |st| = |s| + |t|. A string s is said to be a substring or factor of t if there exist (possibly empty) strings u and v such that t = usv.
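A short Python sketch of these definitions for the alphabet Σ = {0, 1}, with Python strings standing in for formal strings and "" for the empty string ε:

```python
from itertools import product

sigma = {"0", "1"}

def strings_of_length(n):
    # Σ^n: the set of all strings over Σ of length n
    return {"".join(p) for p in product(sigma, repeat=n)}

assert strings_of_length(2) == {"00", "01", "10", "11"}
assert strings_of_length(0) == {""}   # Σ^0 contains only the empty string ε

s, t, u = "01", "10", "11"
# Concatenation is associative but not commutative:
assert (s + t) + u == s + (t + u)
assert s + t != t + s
# ε is the identity element of the free monoid Σ*:
assert s + "" == "" + s == s
# Length is a monoid homomorphism from (Σ*, concatenation) to (ℕ, +):
assert len(s + t) == len(s) + len(t)
```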
9.
Finite-state machine
–
A finite-state machine (FSM) or finite-state automaton, finite automaton, or simply a state machine, is a mathematical model of computation. It is an abstract machine that can be in exactly one of a finite number of states at any given time. The FSM can change from one state to another in response to external inputs; such a change is called a transition. An FSM is defined by a list of its states, its initial state, and the inputs that trigger each transition. The behavior of state machines can be observed in many devices in modern society that perform a predetermined sequence of actions depending on a sequence of events with which they are presented. The finite-state machine has less computational power than some other models of computation such as the Turing machine. The computational power distinction means there are tasks that a Turing machine can do but an FSM cannot. This is because an FSM's memory is limited by the number of states it has. FSMs are studied in the more general field of automata theory. An example of a simple mechanism that can be modeled by a state machine is a turnstile. A turnstile, used to control access to subways and amusement park rides, is a gate with three rotating arms at waist height, one across the entryway. Initially the arms are locked, blocking the entry, preventing patrons from passing through. Depositing a coin or token in a slot on the turnstile unlocks the arms, allowing a single customer to push through. After the customer passes through, the arms are locked again until another coin is inserted. Considered as a state machine, the turnstile has two possible states: Locked and Unlocked. There are two inputs that affect its state: putting a coin in the slot (coin) and pushing the arm (push). In the locked state, pushing on the arm has no effect; no matter how many times the input push is given, the machine stays in the locked state. Putting a coin in, that is, giving the machine a coin input, shifts the state from Locked to Unlocked. In the unlocked state, putting additional coins in has no effect. However, a customer pushing through the arms, giving a push input, shifts the state back to Locked.
Each state is represented by a node, and edges show the transitions from one state to another. Each arrow is labeled with the input that triggers that transition; an input that doesn't cause a change of state is represented by a circular arrow returning to the original state. The arrow into the Locked node from the black dot indicates it is the initial state.
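The turnstile just described can be written down directly as a transition table; the state and input names are the ones used above:

```python
# Turnstile FSM: states are "Locked" and "Unlocked"; inputs are "coin" and "push".
TRANSITIONS = {
    ("Locked", "coin"): "Unlocked",    # a coin unlocks the arms
    ("Locked", "push"): "Locked",      # pushing a locked arm has no effect
    ("Unlocked", "coin"): "Unlocked",  # extra coins have no effect
    ("Unlocked", "push"): "Locked",    # passing through relocks the arms
}

def run(inputs, state="Locked"):
    # Feed a sequence of inputs to the machine, starting from the initial state
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]
    return state

assert run(["push", "push"]) == "Locked"    # pushing never unlocks
assert run(["coin"]) == "Unlocked"
assert run(["coin", "coin"]) == "Unlocked"  # additional coins change nothing
assert run(["coin", "push"]) == "Locked"    # one customer passes through
```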
10.
Group (mathematics)
–
In mathematics, a group is an algebraic structure consisting of a set of elements equipped with an operation that combines any two elements to form a third element. The operation satisfies four conditions called the group axioms, namely closure, associativity, identity and invertibility. The group concept allows entities with highly diverse mathematical origins in abstract algebra and beyond to be handled in a flexible way while retaining their essential structural aspects. The ubiquity of groups in areas within and outside mathematics makes them a central organizing principle of contemporary mathematics. Groups share a fundamental kinship with the notion of symmetry. The concept of a group arose from the study of polynomial equations; after contributions from other fields such as number theory and geometry, the group notion was generalized and firmly established around 1870. Modern group theory, an active mathematical discipline, studies groups in their own right. To explore groups, mathematicians have devised various notions to break groups into smaller, better-understandable pieces, such as subgroups, quotient groups and simple groups. A rich theory has been developed for finite groups, which culminated with the classification of finite simple groups. Since the mid-1980s, geometric group theory, which studies finitely generated groups as geometric objects, has become a particularly active area in group theory. One of the most familiar groups is the set of integers Z, which consists of the numbers ..., −4, −3, −2, −1, 0, 1, 2, 3, 4, .... The following properties of integer addition serve as a model for the group axioms given in the definition below. For any two integers a and b, the sum a + b is also an integer; that is, addition of integers always yields an integer. This property is known as closure under addition. For all integers a, b and c, (a + b) + c = a + (b + c). Expressed in words, adding a to b first, and then adding the result to c, gives the same final result as adding a to the sum of b and c. This property is known as associativity.
If a is any integer, then 0 + a = a + 0 = a. Zero is called the identity element of addition because adding it to any integer returns the same integer. For every integer a, there is an integer b such that a + b = b + a = 0. The integer b is called the inverse element of the integer a and is denoted −a. The integers, together with the operation +, form a mathematical object belonging to a broad class sharing similar structural aspects. To appropriately understand these structures as a collective, the abstract definition of a group is developed.
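Since the integers themselves are infinite, the four axioms modeled above can instead be checked exhaustively for a small finite relative: the integers modulo 5 under addition (the choice n = 5 is arbitrary):

```python
n = 5
elements = range(n)
add = lambda a, b: (a + b) % n   # addition modulo n

# Closure: the result is always another element of Z/5Z
assert all(add(a, b) in elements for a in elements for b in elements)
# Associativity: (a + b) + c = a + (b + c)
assert all(add(add(a, b), c) == add(a, add(b, c))
           for a in elements for b in elements for c in elements)
# Identity: 0 + a = a + 0 = a
assert all(add(0, a) == add(a, 0) == a for a in elements)
# Inverses: for each a, the element (n - a) % n is its additive inverse
assert all(add(a, (n - a) % n) == 0 for a in elements)
```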
11.
Quasigroup
–
In mathematics, especially in abstract algebra, a quasigroup is an algebraic structure resembling a group in the sense that "division" is always possible. Quasigroups differ from groups mainly in that they need not be associative. A quasigroup with an identity element is called a loop. There are at least two structurally equivalent formal definitions of quasigroup: one defines a quasigroup as a set with one binary operation, and the other, from universal algebra, defines a quasigroup as having three primitive operations. We begin with the first definition. A quasigroup is a set, Q, with a binary operation, ∗, obeying the Latin square property. This states that, for each a and b in Q, there exist unique elements x and y in Q such that a ∗ x = b and y ∗ a = b. The uniqueness requirement can be replaced by the requirement that the magma be cancellative. The unique solutions to these equations are written x = a \ b and y = b / a; the operations \ and / are called, respectively, left and right division. The empty set equipped with the empty binary operation satisfies this definition of a quasigroup; some authors accept the empty quasigroup, but others explicitly exclude it. Algebraic structures axiomatized solely by identities are called varieties, and many standard results in universal algebra hold only for varieties. Quasigroups are varieties if left and right division are taken as primitive. A quasigroup (Q, ∗, \, /) is a type (2, 2, 2) algebra satisfying the identities y = x ∗ (x \ y), y = x \ (x ∗ y), y = (y / x) ∗ x, and y = (y ∗ x) / x. In other words, multiplication and division in either order, one after the other, on the same side by the same element, have no net effect. Hence if (Q, ∗) is a quasigroup according to the first definition, then (Q, ∗, \, /) is the same quasigroup in the sense of universal algebra. A loop is a quasigroup with an identity element, that is, an element e such that x ∗ e = x and e ∗ x = x for all x in Q. It follows that the identity element, e, is unique, and that every element of Q has a unique left and right inverse. Since the presence of an identity element is essential, a loop cannot be empty.
A loop that is associative is a group. A group can have a non-associative pique isotope, but it cannot have a nonassociative loop isotope. There are also some weaker associativity-like properties which have been given special names. A left Bol loop is a loop that satisfies x ∗ (y ∗ (x ∗ z)) = (x ∗ (y ∗ x)) ∗ z for each x, y and z in Q; a right Bol loop satisfies the mirror identity. A loop that is both a left and right Bol loop is a Moufang loop. A narrower class is a totally symmetric quasigroup, in which all conjugates coincide as one operation: xy = x / y = x \ y. Another way to define a totally symmetric quasigroup is as a semisymmetric quasigroup which additionally is commutative.
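A standard small example, not taken from the text above: subtraction modulo 5 forms a quasigroup that is not a group, and the Latin square property can be checked exhaustively in Python:

```python
n = 5
Q = range(n)
op = lambda a, b: (a - b) % n   # subtraction mod 5: a quasigroup, not a group

# Latin square property: for each a and b there exist unique x and y
# with a ∗ x = b and y ∗ a = b, i.e. every row and column of the
# operation table is a permutation of Q.
for a in Q:
    assert sorted(op(a, x) for x in Q) == list(Q)   # rows are permutations
    assert sorted(op(y, a) for y in Q) == list(Q)   # columns are permutations

# But the operation is not associative, so this quasigroup is not a group:
assert op(op(3, 2), 1) != op(3, op(2, 1))   # (3-2)-1 = 0, but 3-(2-1) = 2
```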
12.
Abelian group
–
In abstract algebra, an abelian group, also called a commutative group, is a group in which the result of applying the group operation to two group elements does not depend on their order. That is, these are the groups that obey the axiom of commutativity. Abelian groups generalize the arithmetic of addition of integers, and they are named after Niels Henrik Abel. The concept of an abelian group is one of the first concepts encountered in undergraduate abstract algebra, from which many other basic concepts, such as modules and vector spaces, are developed. The theory of abelian groups is generally simpler than that of their non-abelian counterparts, and finite abelian groups are very well understood. On the other hand, the theory of infinite abelian groups is an area of current research. An abelian group is a set, A, together with an operation • that combines any two elements a and b to form another element denoted a • b; the symbol • is a general placeholder for a concretely given operation. To qualify as an abelian group, the set and operation must satisfy five requirements, known as the abelian group axioms. Closure: for all a, b in A, the result a • b is also in A. Associativity: for all a, b, c in A, the equation (a • b) • c = a • (b • c) holds. Identity element: there exists an element e in A such that for all elements a in A, the equation e • a = a • e = a holds. Inverse element: for each a in A, there exists an element b in A such that a • b = b • a = e. Commutativity: for all a, b in A, a • b = b • a. A group in which the operation is not commutative is called a non-abelian group or non-commutative group. There are two main notational conventions for abelian groups, additive and multiplicative. Generally, the multiplicative notation is the usual notation for groups, while the additive notation is the usual notation for modules. To verify that a finite group is abelian, a table, known as a Cayley table, can be constructed in a similar fashion to a multiplication table. If the group is G = {g1 = e, g2, ..., gn} under the operation ⋅, the (i, j)th entry of this table contains the product gi ⋅ gj. The group is abelian if and only if this table is symmetric about the main diagonal. This is true since if the group is abelian, then gi ⋅ gj = gj ⋅ gi, which implies that the (i, j)th entry of the table equals the (j, i)th entry. Every cyclic group G is abelian, because if x, y are in G, then x = am and y = an for some integers m and n, so xy = am an = am+n = an+m = an am = yx. Thus the integers, Z, form an abelian group under addition, as do the integers modulo n. Every ring is an abelian group with respect to its addition operation.
In a commutative ring the invertible elements, or units, form an abelian multiplicative group. In particular, the real numbers are an abelian group under addition, and the nonzero real numbers are an abelian group under multiplication.
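The Cayley-table test described above, sketched for the integers modulo 4 under addition (an arbitrary small abelian group):

```python
n = 4
G = list(range(n))
op = lambda a, b: (a + b) % n   # addition modulo 4

# Build the Cayley table: entry table[i][j] is g_i ⋅ g_j
table = [[op(a, b) for b in G] for a in G]

# The group is abelian if and only if the table is symmetric
# about the main diagonal: table[i][j] == table[j][i] for all i, j.
assert all(table[i][j] == table[j][i] for i in G for j in G)
```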
13.
Magma (algebra)
–
In abstract algebra, a magma is a basic kind of algebraic structure. Specifically, a magma consists of a set, M, equipped with a single binary operation. The binary operation must be closed by definition, but no other properties are imposed. The term groupoid was introduced in 1926 by Heinrich Brandt describing his Brandt groupoid; the term was later appropriated by B. A. Hausmann and Øystein Ore in the sense used in this article. In a couple of reviews of subsequent papers in Zentralblatt, Brandt strongly disagreed with this overloading of terminology. According to Bergman and Hausknecht, "There is no generally accepted word for a set with a not necessarily associative binary operation." The term magma was used by Serre, and it also appears in Bourbaki's Éléments de mathématique, Algèbre, chapitres 1 à 3, 1970. A magma is a set M matched with an operation, •, that sends any two elements a, b ∈ M to another element a • b; the symbol, •, is a general placeholder for a properly defined operation. To qualify as a magma, the set and operation must satisfy the closure requirement: for all a, b in M, the result of the operation a • b is also in M. In mathematical notation: ∀ a, b ∈ M : a • b ∈ M. If • is instead a partial operation, then (M, •) is called a partial magma or, more often, a partial groupoid. A morphism of magmas is a function, f: M → N, mapping magma M to magma N, that preserves the binary operation. The magma operation may be applied repeatedly, and in the general, non-associative case the order matters, which is notated with parentheses. A way to avoid the use of parentheses entirely is prefix notation. The total number of different ways of writing n applications of the magma operator is given by the Catalan number Cn. Thus, for example, C2 = 2, which is just the statement that (ab)c and a(bc) are the only two ways of pairing three elements of a magma with two operations. Less trivially, C3 = 5: ((ab)c)d, (a(bc))d, (ab)(cd), a((bc)d), and a(b(cd)). The numbers of non-isomorphic magmas having 0, 1, 2, 3, 4, ... elements are 1, 1, 10, 3330, 178981952, ....
The corresponding numbers of simultaneously non-isomorphic and non-antiisomorphic magmas are 1, 1, 7, 1734, and 89521056, respectively. A free magma, MX, on a set X is the most general possible magma generated by X. It can be described as the set of words on X with parentheses retained. It can also be viewed, in terms familiar in computer science, as the magma of binary trees with leaves labelled by elements of X; the operation is that of joining trees at the root
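The Catalan-number count of parenthesizations can be verified by brute force. This sketch enumerates every full parenthesization of a sequence of symbols under a formal binary operation • and compares the count with Cn:

```python
# Sketch: the number of ways to fully parenthesize n applications of a
# magma operation equals the Catalan number C_n.
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def parenthesizations(symbols):
    """All full parenthesizations of the sequence under a binary •."""
    if len(symbols) == 1:
        return [symbols[0]]
    results = []
    for i in range(1, len(symbols)):          # split point of the top-level •
        for left in parenthesizations(symbols[:i]):
            for right in parenthesizations(symbols[i:]):
                results.append(f"({left}•{right})")
    return results

# n applications of • combine n + 1 symbols.
assert len(parenthesizations(list("abc"))) == catalan(2)   # C2 = 2
assert len(parenthesizations(list("abcd"))) == catalan(3)  # C3 = 5
```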
14.
Lie group
–
In mathematics, a Lie group /ˈliː/ is a group that is also a differentiable manifold, with the property that the group operations are compatible with the smooth structure. Lie groups are named after Sophus Lie, who laid the foundations of the theory of continuous transformation groups. The term groupes de Lie first appeared in French in 1893 in the thesis of Lie's student Arthur Tresse. An extension of Galois theory to the case of continuous symmetry groups was one of Lie's principal motivations. Lie groups are smooth manifolds and as such can be studied using differential calculus. Lie groups play an important role in modern geometry, on several different levels. Felix Klein argued in his Erlangen program that one can consider various geometries by specifying an appropriate transformation group that leaves certain geometric properties invariant, and this idea later led to the notion of a G-structure, where G is a Lie group of local symmetries of a manifold. On a global level, a Lie group may act on a geometric object, such as a Riemannian or a symplectic manifold, and the presence of continuous symmetries expressed via a Lie group action on a manifold places strong constraints on its geometry. Linear actions of Lie groups are especially important, and are studied in representation theory. This insight opened new possibilities in pure algebra, by providing a uniform construction for most finite simple groups. A real Lie group is a group that is also a finite-dimensional real smooth manifold, in which the group operations of multiplication and inversion are smooth maps. Smoothness of the group multiplication μ : G × G → G, μ(x, y) = xy, means that μ is a smooth mapping of the product manifold G × G into G. These two requirements can be combined into the single requirement that the mapping (x, y) ↦ x⁻¹y be a smooth mapping of the product manifold into G. The 2×2 real invertible matrices form a group under multiplication, denoted by GL(2, R), and this is a four-dimensional noncompact real Lie group.
This group is disconnected; it has two connected components corresponding to the positive and negative values of the determinant. The rotation matrices form a subgroup of GL(2, R), denoted by SO(2). It is a Lie group in its own right: specifically, using the rotation angle φ as a parameter, this group can be parametrized as follows: SO(2) = { [cos φ, −sin φ; sin φ, cos φ] : 0 ≤ φ < 2π }. Addition of the angles corresponds to multiplication of the elements of SO(2), and thus both multiplication and inversion are differentiable maps. The orthogonal group also forms an example of a Lie group. All of these examples of Lie groups fall within the class of classical groups. Hilbert's fifth problem asked whether replacing differentiable manifolds with topological or analytic ones can yield new examples; if the underlying manifold is allowed to be infinite-dimensional, then one arrives at the notion of an infinite-dimensional Lie group
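The statement that angle addition corresponds to matrix multiplication in SO(2) can be checked numerically. This sketch multiplies two rotation matrices and compares the product with the rotation by the sum of the angles (the angles 0.3 and 0.5 are arbitrary):

```python
# Sketch: in SO(2), rotation(a) · rotation(b) = rotation(a + b), and
# every rotation matrix has determinant +1.
import math

def rotation(phi):
    return [[math.cos(phi), -math.sin(phi)],
            [math.sin(phi),  math.cos(phi)]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

r = matmul(rotation(0.3), rotation(0.5))
expected = rotation(0.8)
assert all(abs(r[i][j] - expected[i][j]) < 1e-12
           for i in range(2) for j in range(2))

# Determinant +1 places SO(2) in the positive-determinant component of GL(2, R).
m = rotation(0.3)
assert abs(m[0][0] * m[1][1] - m[0][1] * m[1][0] - 1.0) < 1e-12
```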
15.
Group theory
–
In mathematics and abstract algebra, group theory studies the algebraic structures known as groups. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra. Linear algebraic groups and Lie groups are two branches of group theory that have experienced advances and have become subject areas in their own right. Various physical systems, such as crystals and the hydrogen atom, may be modelled by symmetry groups; thus group theory and the closely related representation theory have many important applications in physics, chemistry, and materials science. Group theory is also central to public key cryptography. The first class of groups to undergo a systematic study was permutation groups: given any set X and a collection G of bijections of X into itself that is closed under compositions and inverses, G is a group acting on X. If X consists of n elements and G consists of all permutations, G is the symmetric group Sn; in general, any permutation group G is a subgroup of the symmetric group of X. An early construction due to Cayley exhibited any group as a permutation group, acting on itself by means of the left regular representation. In many cases, the structure of a group can be studied using the properties of its action on the corresponding set. For example, in this way one proves that for n ≥ 5 the alternating group An is simple, and this fact plays a key role in the impossibility of solving a general algebraic equation of degree n ≥ 5 in radicals. The next important class of groups is given by matrix groups: here G is a set consisting of invertible matrices of given order n over a field K that is closed under products and inverses. Such a group acts on the vector space Kn by linear transformations. In the case of permutation groups, X is a set; for matrix groups, X is a vector space. The concept of a group is closely related with the concept of a symmetry group. The theory of Lie groups forms a bridge connecting group theory with differential geometry. A long line of research, originating with Lie and Klein, considers group actions on manifolds; the groups themselves may be discrete or continuous.
Most groups considered in the first stage of the development of group theory were concrete, having been realized through numbers, permutations, or matrices. It was not until the late nineteenth century that the idea of an abstract group as a set with operations satisfying a certain system of axioms began to take hold. A typical way of specifying an abstract group is through a presentation by generators and relations. A significant source of abstract groups is given by the construction of a factor group, or quotient group, G/H, of a group G by a normal subgroup H. Class groups of algebraic number fields were among the earliest examples of factor groups, of much interest in number theory
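Cayley's construction mentioned above can be made concrete on a small group. This sketch realizes Z/4Z under addition as a permutation group acting on itself by left translation (the choice of Z/4Z is arbitrary):

```python
# Sketch: Cayley's left regular representation — each group element g
# induces the permutation x ↦ g + x (mod 4) of the underlying set.
elements = [0, 1, 2, 3]

def left_translation(g):
    return tuple((g + x) % 4 for x in elements)

perms = {g: left_translation(g) for g in elements}

# Each translation is a bijection of the underlying set …
assert all(sorted(p) == elements for p in perms.values())
# … and distinct group elements give distinct permutations (faithfulness).
assert len(set(perms.values())) == len(elements)
```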
16.
Ring (mathematics)
–
In mathematics, a ring is one of the fundamental algebraic structures used in abstract algebra. It consists of a set equipped with two binary operations that generalize the arithmetic operations of addition and multiplication. Through this generalization, theorems from arithmetic are extended to non-numerical objects such as polynomials, series, and matrices. The conceptualization of rings started in the 1870s and was completed in the 1920s. Key contributors include Dedekind, Hilbert, Fraenkel, and Noether. Rings were first formalized as a generalization of Dedekind domains that occur in number theory, and of polynomial rings and rings of invariants that occur in algebraic geometry and invariant theory. Afterward, they also proved to be useful in other branches of mathematics such as geometry and analysis. A ring is an abelian group with a second binary operation that is associative, is distributive over the abelian group operation, and has an identity element. By extension from the integers, the abelian group operation is called addition and the second binary operation is called multiplication. Whether a ring is commutative or not has profound implications on its behavior as an abstract object; as a result, commutative ring theory, commonly known as commutative algebra, is a key topic in ring theory. Its development has been greatly influenced by problems and ideas occurring naturally in algebraic number theory. The most familiar example of a ring is the set of all integers, Z = {…, −5, −4, −3, −2, −1, 0, 1, 2, 3, 4, 5, …}. The familiar properties for addition and multiplication of integers serve as a model for the axioms for rings. A ring is a set R equipped with two binary operations + and · satisfying the following three sets of axioms, called the ring axioms. 1. R is an abelian group under addition, meaning that: (a + b) + c = a + (b + c) for all a, b, c in R; a + b = b + a for all a, b in R; there is an element 0 in R such that a + 0 = a for all a in R; for each a in R there exists −a in R such that a + (−a) = 0. 2. R is a monoid under multiplication, meaning that: (a · b) · c = a · (b · c) for all a, b, c in R.
There is an element 1 in R such that a · 1 = a and 1 · a = a for all a in R. 3. Multiplication is distributive with respect to addition: a · (b + c) = (a · b) + (a · c) for all a, b, c in R, and (b + c) · a = (b · a) + (c · a) for all a, b, c in R. As explained in § History below, many authors follow an alternative convention in which a ring is not defined to have a multiplicative identity. This article adopts the convention that, unless otherwise stated, a ring is assumed to have such an identity
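The three sets of ring axioms above can be verified exhaustively on a small finite ring. This sketch checks them for Z/6Z, with both operations taken modulo 6 (the modulus is an arbitrary choice):

```python
# Sketch: exhaustive check of the ring axioms for Z/6Z.
n = 6
R = range(n)
add = lambda a, b: (a + b) % n
mul = lambda a, b: (a * b) % n

for a in R:
    assert add(a, 0) == a                      # additive identity
    assert add(a, (n - a) % n) == 0            # additive inverse
    assert mul(a, 1) == a and mul(1, a) == a   # multiplicative identity
for a in R:
    for b in R:
        assert add(a, b) == add(b, a)          # addition commutes
        for c in R:
            assert add(add(a, b), c) == add(a, add(b, c))  # + associative
            assert mul(mul(a, b), c) == mul(a, mul(b, c))  # · associative
            assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))  # distributivity
```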
17.
Commutative ring
–
In ring theory, a branch of abstract algebra, a commutative ring is a ring in which the multiplication operation is commutative. The study of commutative rings is called commutative algebra. Complementarily, noncommutative algebra is the study of noncommutative rings, where multiplication is not required to be commutative. A ring is a set equipped with two binary operations, i.e. operations combining any two elements of the ring to a third. They are called addition and multiplication and commonly denoted by + and ⋅, e.g. a + b and a ⋅ b. The identity elements for addition and multiplication are denoted 0 and 1, respectively. If the multiplication is commutative, i.e. a ⋅ b = b ⋅ a, then the ring is called commutative. In the remainder of this article, all rings will be commutative, unless explicitly stated otherwise. An important example, and in some sense crucial, is the ring of integers Z with the two operations of addition and multiplication. As the multiplication of integers is a commutative operation, this is a commutative ring. It is usually denoted Z as an abbreviation of the German word Zahlen. A field is a commutative ring where every non-zero element a is invertible, i.e. has a multiplicative inverse b such that a ⋅ b = 1. Therefore, by definition, any field is a commutative ring; the rational, real and complex numbers form fields. An example is the set of matrices of divided differences with respect to a set of nodes. If R is a commutative ring, then the set of all polynomials in the variable X whose coefficients are in R forms the polynomial ring. The same holds true for several variables. If V is some topological space, for example a subset of some Rn, real- or complex-valued continuous functions on V form a commutative ring. The same is true for differentiable or holomorphic functions, when the two concepts are defined, such as for V a complex manifold. In contrast to fields, where every nonzero element is multiplicatively invertible, the theory of rings is more complicated.
There are several notions to cope with that situation. First, an element a of ring R is called a unit if it possesses a multiplicative inverse. Another particular type of element is a zero divisor, i.e. a non-zero element a such that there exists a non-zero element b of the ring such that ab = 0. If R possesses no zero divisors, it is called an integral domain, since it resembles the integers in some ways. Many of the following notions also exist for not necessarily commutative rings, but the definitions and properties are usually more complicated; for example, all ideals in a commutative ring are automatically two-sided, which simplifies the situation considerably. Given any subset F = { fj : j ∈ J } of R, the ideal generated by F is the smallest ideal that contains F. An ideal generated by one element is called a principal ideal. A ring all of whose ideals are principal is called a principal ideal ring. Any ring has two ideals, namely the zero ideal and R, the whole ring
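The distinction between units and zero divisors is easy to see in finite rings. This sketch computes both sets in Z/nZ; the ring is an integral domain exactly when n is prime (the moduli 6 and 7 are illustrative choices):

```python
# Sketch: in Z/nZ every non-zero element is either a unit or a zero
# divisor, and Z/nZ is an integral domain exactly when n is prime.
def units_and_zero_divisors(n):
    units = {a for a in range(1, n) if any(a * b % n == 1 for b in range(n))}
    zero_divisors = {a for a in range(1, n)
                     if any(a * b % n == 0 for b in range(1, n))}
    return units, zero_divisors

u6, z6 = units_and_zero_divisors(6)
assert u6 == {1, 5} and z6 == {2, 3, 4}   # Z/6Z is not an integral domain

u7, z7 = units_and_zero_divisors(7)
assert z7 == set()                        # Z/7Z is an integral domain (a field)
```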
18.
Field (mathematics)
–
In mathematics, a field is a set on which are defined addition, subtraction, multiplication, and division, which behave as they do when applied to rational and real numbers. A field is thus an algebraic structure which is widely used in algebra and number theory. The best known fields are the field of rational numbers and the field of real numbers. In addition, the field of complex numbers is widely used, not only in mathematics. Finite fields are used in most cryptographic protocols used for computer security. Any field may be used as the scalars for a vector space, which is the standard general context for linear algebra. The field axioms are the following. Associativity of addition and multiplication: for all a, b, and c in F, the following hold: a + (b + c) = (a + b) + c and a · (b · c) = (a · b) · c. Commutativity of addition and multiplication: for all a and b in F, the following hold: a + b = b + a and a · b = b · a. Existence of additive and multiplicative identity elements: there exists an element of F, called the additive identity element and denoted by 0, such that a + 0 = a for all a in F. Likewise, there is an element, called the multiplicative identity element and denoted by 1, such that a · 1 = a for all a in F. To exclude the trivial ring, the additive identity and the multiplicative identity are required to be distinct. Existence of additive inverses and multiplicative inverses: for every a in F, there exists an element −a in F such that a + (−a) = 0; similarly, for any a in F other than 0, there exists an element a−1 in F such that a · a−1 = 1. In other words, subtraction and division operations exist. Distributivity of multiplication over addition: for all a, b and c in F, the following equality holds: a · (b + c) = (a · b) + (a · c). A simple example of a field is the field of rational numbers, consisting of numbers which can be written as fractions a/b, where a and b are integers and b ≠ 0. The additive inverse of such a fraction is simply −a/b, and the multiplicative inverse, provided a ≠ 0, is b/a. To see the latter, note that (b/a) ⋅ (a/b) = (ba)/(ab) = 1. In addition to number systems such as the rationals, there are other, less immediate examples of fields.
The following example is a field consisting of four elements called O, I, A and B. The notation is chosen such that O plays the role of the additive identity element, and I is the multiplicative identity. One can check that all field axioms are satisfied. For example, A · (B + A) = A · I = A, which equals A · B + A · A = I + B = A, as required by the distributivity. This field is called a finite field with four elements
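The four-element field can be tabulated explicitly. This sketch encodes the addition and multiplication tables of {O, I, A, B} (symmetric entries are stored once, since both operations are commutative) and re-checks the distributivity instance from the text:

```python
# Sketch: operation tables for the field with four elements, with O the
# additive identity and I the multiplicative identity.
ADD = {('O','O'):'O', ('O','I'):'I', ('O','A'):'A', ('O','B'):'B',
       ('I','I'):'O', ('I','A'):'B', ('I','B'):'A',
       ('A','A'):'O', ('A','B'):'I', ('B','B'):'O'}
MUL = {('O','O'):'O', ('O','I'):'O', ('O','A'):'O', ('O','B'):'O',
       ('I','I'):'I', ('I','A'):'A', ('I','B'):'B',
       ('A','A'):'B', ('A','B'):'I', ('B','B'):'A'}

def op(table, x, y):
    # Both operations are commutative, so one orientation per pair suffices.
    return table.get((x, y), table.get((y, x)))

# The distributivity instance from the text: A·(B + A) = A·B + A·A.
lhs = op(MUL, 'A', op(ADD, 'B', 'A'))
rhs = op(ADD, op(MUL, 'A', 'B'), op(MUL, 'A', 'A'))
assert lhs == rhs == 'A'
```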
19.
Lattice (order)
–
A lattice is an abstract structure studied in the mathematical subdisciplines of order theory and abstract algebra. It consists of a partially ordered set in which every two elements have a unique supremum and a unique infimum. An example is given by the natural numbers, partially ordered by divisibility, for which the unique supremum is the least common multiple and the unique infimum is the greatest common divisor. Lattices can also be characterized as algebraic structures satisfying certain axiomatic identities. Since the two definitions are equivalent, lattice theory draws on both order theory and universal algebra. Semilattices include lattices, which in turn include Heyting and Boolean algebras; these lattice-like structures all admit order-theoretic as well as algebraic descriptions. If (L, ≤) is a partially ordered set, and S ⊆ L is an arbitrary subset, then an element u of L is said to be an upper bound of S if s ≤ u for each s ∈ S. A set may have many upper bounds, or none at all. An upper bound u of S is said to be its least upper bound, or join, or supremum, if u ≤ x for each upper bound x of S. A set need not have a least upper bound, but it cannot have more than one. Dually, l ∈ L is said to be a lower bound of S if l ≤ s for each s ∈ S. A lower bound l of S is said to be its greatest lower bound, or meet, or infimum, if x ≤ l for each lower bound x of S. A set may have many lower bounds, or none at all, but can have at most one greatest lower bound. A partially ordered set (L, ≤) is called a join-semilattice and a meet-semilattice if each two-element subset {a, b} ⊆ L has a join and a meet, denoted by a ∨ b and a ∧ b respectively; (L, ≤) is called a lattice if it is both a join- and a meet-semilattice. This definition makes ∨ and ∧ binary operations. Both operations are monotone with respect to the order: a1 ≤ a2 and b1 ≤ b2 implies that a1 ∨ b1 ≤ a2 ∨ b2 and a1 ∧ b1 ≤ a2 ∧ b2. It follows by an induction argument that every non-empty finite subset of a lattice has a least upper bound and a greatest lower bound.
With additional assumptions, further conclusions may be possible; see Completeness (order theory) for more discussion of this subject. A bounded lattice is a lattice that additionally has a greatest element 1 and a least element 0, which satisfy 0 ≤ x ≤ 1 for every x in L. The greatest and least elements are also called the maximum and minimum, or the top and bottom element. A partially ordered set is a bounded lattice if and only if every finite set of elements (including the empty set) has a join and a meet. Taking B to be the empty set, ⋁(A ∪ ∅) = (⋁A) ∨ (⋁∅) = (⋁A) ∨ 0 = ⋁A and ⋀(A ∪ ∅) = (⋀A) ∧ (⋀∅) = (⋀A) ∧ 1 = ⋀A, which is consistent with the fact that A ∪ ∅ = A. A lattice element y is said to cover another element x if y > x, but there does not exist a z such that y > z > x. Here, y > x means x ≤ y and x ≠ y
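The divisibility example from the opening paragraph can be checked directly. This sketch takes the positive divisors of 60 (an arbitrary choice) as a bounded lattice with join = lcm and meet = gcd:

```python
# Sketch: the positive divisors of 60, ordered by divisibility, form a
# bounded lattice with join = least common multiple and meet = greatest
# common divisor.
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

divisors = [d for d in range(1, 61) if 60 % d == 0]

# Closure: the join and meet of two divisors of 60 are again divisors of 60.
assert all(lcm(a, b) in divisors and gcd(a, b) in divisors
           for a in divisors for b in divisors)

# Bounded: 1 divides every element (bottom) and every element divides 60 (top).
assert all(a % 1 == 0 and 60 % a == 0 for a in divisors)

# Sample join and meet: 4 ∨ 6 = 12 and 4 ∧ 6 = 2.
assert lcm(4, 6) == 12 and gcd(4, 6) == 2
```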
20.
Complemented lattice
–
In the mathematical discipline of order theory, a complemented lattice is a bounded lattice (with least element 0 and greatest element 1), in which every element a has a complement, i.e. an element b satisfying a ∨ b = 1 and a ∧ b = 0. A relatively complemented lattice is a lattice such that every interval [c, d], viewed as a bounded lattice in its own right, is a complemented lattice. An orthocomplementation on a complemented lattice is an involution which is order-reversing and maps each element to a complement. An orthocomplemented lattice satisfying a weak form of the modular law is called an orthomodular lattice. In distributive lattices, complements are unique; every complemented distributive lattice has a unique orthocomplementation and is in fact a Boolean algebra. A complemented lattice is a bounded lattice in which every element a has a complement, i.e. an element b such that a ∨ b = 1 and a ∧ b = 0. In general an element may have more than one complement; however, in a distributive lattice every element will have at most one complement. A relatively complemented lattice is characterized by the property that for every element a in an interval [c, d] there is an element b such that a ∨ b = d and a ∧ b = c. Such an element b is called a complement of a relative to the interval. A distributive lattice is complemented if and only if it is bounded and relatively complemented. The lattice of subspaces of a vector space provides an example of a complemented lattice that is not, in general, distributive. An orthocomplementation on a bounded lattice maps each element a to an orthocomplement a⊥ such that the complement laws a ∨ a⊥ = 1 and a ∧ a⊥ = 0 hold, the map is an involution (a⊥⊥ = a), and it is order-reversing: if a ≤ b then b⊥ ≤ a⊥. An orthocomplemented lattice or ortholattice is a bounded lattice which is equipped with an orthocomplementation. The lattice of subspaces of an inner product space, with the orthogonal complement operation, provides an example of an orthocomplemented lattice that is not, in general, distributive. Boolean algebras are a special case of orthocomplemented lattices. The ortholattices are most often used in quantum logic, where the closed subspaces of a separable Hilbert space represent quantum propositions. Orthocomplemented lattices, like Boolean algebras, satisfy de Morgan's laws. A lattice is called modular if for all elements a, b and c the implication holds: if a ≤ c, then a ∨ (b ∧ c) = (a ∨ b) ∧ c.
This is weaker than distributivity; e.g. the above-shown lattice M3 is modular but not distributive. A natural further weakening of this condition for orthocomplemented lattices, necessary for applications in quantum logic, is to require it only in the special case b = a⊥. An orthomodular lattice is defined as an orthocomplemented lattice such that for any two elements the implication holds: if a ≤ c, then a ∨ (a⊥ ∧ c) = c. Lattices of this form are of importance for the study of quantum logic
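A small concrete complemented lattice: the divisors of a square-free number. This sketch verifies that in the bounded lattice of divisors of 30 (join = lcm, meet = gcd, bottom = 1, top = 30), the element 30 // a is a complement of a:

```python
# Sketch: in the lattice of divisors of 30 (square-free, chosen for
# illustration), every element a has 30 // a as a complement:
# a ∨ (30 // a) = 30 (the top) and a ∧ (30 // a) = 1 (the bottom).
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

divisors = [d for d in range(1, 31) if 30 % d == 0]
for a in divisors:
    b = 30 // a
    assert lcm(a, b) == 30 and gcd(a, b) == 1
```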
21.
Heyting algebra
–
In mathematics, a Heyting algebra is a bounded lattice equipped with a binary operation a → b of implication such that c ∧ a ≤ b is equivalent to c ≤ a → b. From a logical standpoint, A → B is by definition the weakest proposition for which modus ponens, the inference rule A → B, A ⊢ B, is sound. Equivalently, a Heyting algebra is a residuated lattice whose monoid operation a⋅b is a ∧ b. Like Boolean algebras, Heyting algebras form a variety axiomatizable with finitely many equations. Heyting algebras were introduced by Arend Heyting to formalize intuitionistic logic. As lattices, Heyting algebras are distributive. The open sets of a topological space form such a lattice, and therefore a Heyting algebra. In the finite case every nonempty distributive lattice, in particular every nonempty finite chain, is complete and completely distributive, and hence a Heyting algebra. It follows from the definition that 1 ≤ 0 → a. Although the negation operation ¬a is not part of the definition, it is definable as a → 0. The definition implies that a ∧ ¬a = 0, making the intuitive content of ¬a the proposition that to assume a would lead to a contradiction. It can further be shown that a ≤ ¬¬a, although the converse, ¬¬a ≤ a, is not true in general; that is, double negation elimination does not hold in general in a Heyting algebra. Heyting algebras generalize Boolean algebras in the sense that a Heyting algebra satisfying a ∨ ¬a = 1 (the law of excluded middle) is a Boolean algebra. Complete Heyting algebras are a central object of study in pointless topology. The internal logic of a topos is based on the Heyting algebra of subobjects of the terminal object 1 ordered by inclusion. Every Heyting algebra whose set of non-greatest elements has a greatest element is subdirectly irreducible, and it follows that even among the finite Heyting algebras there exist infinitely many that are subdirectly irreducible, no two of which have the same equational theory.
Hence no finite set of finite Heyting algebras can supply all the counterexamples to non-laws of Heyting algebra. Nevertheless, it is decidable whether an equation holds of all Heyting algebras. Heyting algebras are sometimes called pseudo-Boolean algebras, or even Brouwer lattices, although the latter term may denote the dual definition. A Heyting algebra H is a bounded lattice such that for all a and b in H there is a greatest element x of H such that a ∧ x ≤ b. This element is the relative pseudo-complement of a with respect to b, and we write 1 and 0 for the largest and the smallest element of H, respectively. In any Heyting algebra, one defines the pseudo-complement ¬a of any element a by setting ¬a = (a → 0). By definition, a ∧ ¬a = 0, and ¬a is the largest element having this property. However, it is not in general true that a ∨ ¬a = 1; thus ¬ is only a pseudo-complement, not a true complement. A complete Heyting algebra is a Heyting algebra that is a complete lattice
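The implication a → b can be computed by brute force in any finite distributive lattice. This sketch uses the divisors of 12 under divisibility (an arbitrary finite distributive lattice, with meet = gcd) and finds the greatest x with a ∧ x ≤ b:

```python
# Sketch: the divisors of 12, ordered by divisibility, form a finite
# distributive lattice and hence a Heyting algebra; a → b is the
# greatest x with gcd(a, x) dividing b.
from math import gcd

L = [d for d in range(1, 13) if 12 % d == 0]   # 1, 2, 3, 4, 6, 12

def implies(a, b):
    candidates = [x for x in L if b % gcd(a, x) == 0]
    # The greatest candidate in the divisibility order is the one that
    # every other candidate divides.
    top = [x for x in candidates if all(x % y == 0 for y in candidates)]
    assert len(top) == 1
    return top[0]

assert implies(4, 2) == 6    # greatest divisor x of 12 with gcd(4, x) | 2
assert implies(3, 12) == 12  # a ≤ b forces a → b = 1 (the top element)
# Pseudo-complement: ¬a = a → 0, where the bottom element here is 1.
assert implies(4, 1) == 3
```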
22.
Boolean algebra (structure)
–
In abstract algebra, a Boolean algebra or Boolean lattice is a complemented distributive lattice. This type of algebraic structure captures essential properties of both set operations and logic operations. A Boolean algebra can be seen as a generalization of a power set algebra or a field of sets; it is also a special case of a De Morgan algebra and a Kleene algebra. The term Boolean algebra honors George Boole, a self-educated English mathematician. Boole's formulation differs from that described above in some important respects. For example, conjunction and disjunction in Boole were not a dual pair of operations. Boolean algebra emerged in the 1860s, in papers written by William Jevons and Charles Sanders Peirce. The first systematic presentation of Boolean algebra and distributive lattices is owed to the 1890 Vorlesungen of Ernst Schröder. The first extensive treatment of Boolean algebra in English is A. N. Whitehead's 1898 Universal Algebra. Boolean algebra as an axiomatic algebraic structure in the modern axiomatic sense begins with a 1904 paper by Edward V. Huntington. Boolean algebra came of age as serious mathematics with the work of Marshall Stone in the 1930s, and with Garrett Birkhoff's 1940 Lattice Theory. In the 1960s, Paul Cohen, Dana Scott, and others found deep new results in mathematical logic and axiomatic set theory using offshoots of Boolean algebra, namely forcing and Boolean-valued models. A Boolean algebra with only one element is called a trivial Boolean algebra or a degenerate Boolean algebra. It follows from the last three pairs of axioms above (identity, distributivity and complements), or from the absorption axiom, that a = b ∧ a if and only if a ∨ b = b. The relation ≤ defined by a ≤ b if these equivalent conditions hold is a partial order with least element 0 and greatest element 1. The meet a ∧ b and the join a ∨ b of two elements coincide with their infimum and supremum, respectively, with respect to ≤. The first four pairs of axioms constitute a definition of a bounded lattice.
It follows from the first five pairs of axioms that any complement is unique. The set of axioms is self-dual in the sense that if one exchanges ∨ with ∧ and 0 with 1 in an axiom, the result is again an axiom. Therefore, by applying this operation to a Boolean algebra, one obtains another Boolean algebra with the same elements. Furthermore, every possible input-output behavior of a logic circuit can be modeled by a suitable Boolean expression. The power set of any given set S forms a Boolean algebra with the two operations ∨ := ∪ (union) and ∧ := ∩ (intersection); the smallest element 0 is the empty set and the largest element 1 is the set S itself. Starting with the propositional calculus with κ sentence symbols, form the Lindenbaum algebra. This construction yields a Boolean algebra, and it is in fact the free Boolean algebra on κ generators. A truth assignment in propositional calculus is then a Boolean algebra homomorphism from this algebra to the two-element Boolean algebra. Interval algebras are useful in the study of Lindenbaum–Tarski algebras; every countable Boolean algebra is isomorphic to an interval algebra. For any natural number n, the set of all positive divisors of n, defining a ≤ b if a divides b, forms a distributive lattice; this lattice is a Boolean algebra if and only if n is square-free
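The power-set example can be verified exhaustively for a small set. This sketch builds the Boolean algebra of all subsets of S = {1, 2, 3} (an arbitrary choice) and checks the complement laws and De Morgan's laws for every pair of elements:

```python
# Sketch: the power set of S = {1, 2, 3} as a Boolean algebra with
# join = union, meet = intersection, complement relative to S.
from itertools import chain, combinations

S = frozenset({1, 2, 3})
power_set = [frozenset(c) for c in
             chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))]

for a in power_set:
    for b in power_set:
        assert S - (a | b) == (S - a) & (S - b)   # ¬(a ∨ b) = ¬a ∧ ¬b
        assert S - (a & b) == (S - a) | (S - b)   # ¬(a ∧ b) = ¬a ∨ ¬b
    assert a | (S - a) == S                       # a ∨ ¬a = 1
    assert a & (S - a) == frozenset()             # a ∧ ¬a = 0
```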
23.
Map of lattices
–
The concept of a lattice arises in order theory, a branch of mathematics. The Hasse diagram below depicts the relationships among some important subclasses of lattices. A Boolean algebra is a distributive lattice. A Boolean algebra is a Heyting algebra. A distributive orthocomplemented lattice is orthomodular. A bounded lattice is a lattice. A residuated lattice is a lattice. A modular complemented lattice is relatively complemented. A Boolean algebra is relatively complemented. A relatively complemented lattice is a lattice. A totally ordered set is a distributive lattice. An atomic lattice is a lattice. A semi-lattice is a partially ordered set
24.
Vector space
–
A vector space is a collection of objects called vectors, which may be added together and multiplied by numbers, called scalars in this context. Scalars are often taken to be real numbers, but there are also vector spaces with scalar multiplication by complex numbers, rational numbers, or generally any field. The operations of vector addition and scalar multiplication must satisfy certain requirements, called axioms. Euclidean vectors are an example of a vector space; they represent physical quantities such as forces: any two forces can be added to yield a third, and the multiplication of a force vector by a real multiplier is another force vector. In the same vein, but in a more geometric sense, vectors representing displacements in the plane or in three-dimensional space also form vector spaces. Vector spaces are the subject of linear algebra and are well characterized by their dimension, which, roughly speaking, specifies the number of independent directions in the space. Infinite-dimensional vector spaces arise naturally in mathematical analysis, as function spaces, and these vector spaces are generally endowed with additional structure, which may be a topology, allowing the consideration of issues of proximity and continuity. Among these topologies, those that are defined by a norm or inner product are commonly used. This is particularly the case of Banach spaces and Hilbert spaces. Historically, the first ideas leading to vector spaces can be traced back as far as the 17th century's analytic geometry, matrices, systems of linear equations, and Euclidean vectors. Today, vector spaces are applied throughout mathematics, science and engineering. Furthermore, vector spaces furnish an abstract, coordinate-free way of dealing with geometrical and physical objects such as tensors. This in turn allows the examination of local properties of manifolds by linearization techniques. Vector spaces may be generalized in several ways, leading to more advanced notions in geometry and abstract algebra.
The concept of vector space will first be explained by describing two particular examples. The first example of a vector space consists of arrows in a fixed plane. This is used in physics to describe forces or velocities. Given any two such arrows, v and w, the parallelogram spanned by these two arrows contains one diagonal arrow that starts at the origin, too. This new arrow is called the sum of the two arrows and is denoted v + w. The second operation takes any arrow v and any real number a and yields a scaled arrow av; when a is negative, av is defined as the arrow pointing in the opposite direction, instead. The second example is that of pairs of real numbers x and y. Such a pair is written as (x, y). The sum of two such pairs and multiplication of a pair with a number are defined as follows: (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2) and a(x, y) = (ax, ay). The first example above reduces to this one if the arrows are represented by the pair of Cartesian coordinates of their end points. A vector space over a field F is a set V together with two operations that satisfy the eight axioms listed below. Elements of V are commonly called vectors, and elements of F are commonly called scalars. The first operation, called vector addition, takes any two vectors v and w and gives a third vector v + w; the second operation, called scalar multiplication, takes any scalar a and any vector v and gives another vector av. In this article, vectors are represented in boldface to distinguish them from scalars
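The second example above, pairs of real numbers with componentwise operations, can be sketched directly. The sample vectors here are arbitrary; the asserts spot-check a few of the vector space axioms:

```python
# Sketch: pairs of real numbers with componentwise addition and scalar
# multiplication, as in the second example above.
def add(v, w):
    return (v[0] + w[0], v[1] + w[1])

def scale(a, v):
    return (a * v[0], a * v[1])

v, w = (1.0, 2.0), (3.0, -1.0)
assert add(v, w) == (4.0, 1.0)                   # (1,2) + (3,−1) = (4,1)
assert add(v, w) == add(w, v)                    # commutativity of +
assert scale(2.0, add(v, w)) == add(scale(2.0, v), scale(2.0, w))  # a(v+w)=av+aw
assert scale(-1.0, v) == (-1.0, -2.0)            # negative scalar reverses direction
assert add(v, scale(-1.0, v)) == (0.0, 0.0)      # additive inverse
```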
25.
Linear algebra
–
Linear algebra is the branch of mathematics concerning vector spaces and linear mappings between such spaces. It includes the study of lines, planes, and subspaces. The set of points with coordinates that satisfy a linear equation forms a hyperplane in an n-dimensional space. The conditions under which a set of n hyperplanes intersect in a single point is an important focus of study in linear algebra. Such an investigation is initially motivated by a system of linear equations containing several unknowns; such equations are naturally represented using the formalism of matrices and vectors. Linear algebra is central to both pure and applied mathematics. For instance, abstract algebra arises by relaxing the axioms of a vector space, leading to a number of generalizations. Functional analysis studies the infinite-dimensional version of the theory of vector spaces. Combined with calculus, linear algebra facilitates the solution of linear systems of differential equations. Because linear algebra is such a well-developed theory, nonlinear mathematical models are sometimes approximated by linear models. The study of linear algebra first emerged from the study of determinants. Determinants were used by Leibniz in 1693, and subsequently, Gabriel Cramer devised Cramer's rule for solving linear systems in 1750. Later, Gauss further developed the theory of solving linear systems by using Gaussian elimination. The study of matrix algebra first emerged in England in the mid-1800s. In 1844 Hermann Grassmann published his Theory of Extension, which included foundational new topics of what is today called linear algebra. In 1848, James Joseph Sylvester introduced the term matrix, which is Latin for womb. While studying compositions of linear transformations, Arthur Cayley was led to define matrix multiplication and inverses. Crucially, Cayley used a single letter to denote a matrix, thus treating a matrix as an aggregate object.
In 1882, Hüseyin Tevfik Pasha wrote the book titled Linear Algebra. The first modern and more precise definition of a vector space was introduced by Peano in 1888, and by 1900, a theory of linear transformations of finite-dimensional vector spaces had emerged. Linear algebra took its modern form in the first half of the twentieth century. The use of matrices in quantum mechanics, special relativity, and statistics helped spread the subject of linear algebra beyond pure mathematics. The origin of many of these ideas is discussed in the articles on determinants and Gaussian elimination. Linear algebra first appeared in American graduate textbooks in the 1940s. Following work by the School Mathematics Study Group, U.S. high schools asked 12th grade students to do matrix algebra, formerly reserved for college, in the 1960s. In France during the 1960s, educators attempted to teach linear algebra through finite-dimensional vector spaces in the first year of secondary school, and this was met with a backlash in the 1980s that removed linear algebra from the curriculum. To better suit 21st century applications, such as data mining and uncertainty analysis, linear algebra can be based upon the singular value decomposition instead of Gaussian elimination
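Cramer's rule, mentioned above, is easy to state for the smallest nontrivial case. This sketch solves a 2×2 linear system via determinants; the sample system is an arbitrary illustration:

```python
# Sketch: Cramer's rule for the 2×2 system
#   a11·x + a12·y = b1
#   a21·x + a22·y = b2
def cramer_2x2(a11, a12, a21, a22, b1, b2):
    det = a11 * a22 - a12 * a21
    if det == 0:
        # The two lines are parallel or coincident: no unique intersection.
        raise ValueError("coefficient matrix is singular")
    x = (b1 * a22 - a12 * b2) / det
    y = (a11 * b2 - b1 * a21) / det
    return x, y

# 2x + y = 5 and x − y = 1 have the unique solution x = 2, y = 1.
assert cramer_2x2(2, 1, 1, -1, 5, 1) == (2.0, 1.0)
```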