1.
Set (mathematics)
–
In mathematics, a set is a well-defined collection of distinct objects, considered as an object in its own right. For example, the numbers 2, 4, and 6 are distinct objects when considered separately, but when considered collectively they form a single set of size three, written {2, 4, 6}. Sets are one of the most fundamental concepts in mathematics. Developed at the end of the 19th century, set theory is now a ubiquitous part of mathematics, and in mathematics education elementary topics such as Venn diagrams are taught at a young age. The German word Menge, rendered as "set" in English, was coined by Bernard Bolzano in his work The Paradoxes of the Infinite. A set is a collection of distinct objects. The objects that make up a set can be anything: numbers, people, letters of the alphabet, even other sets. Sets are conventionally denoted with capital letters. Sets A and B are equal if and only if they have precisely the same elements. Cantor's definition turned out to be inadequate; instead, the notion of a set is taken as an undefined primitive notion in axiomatic set theory. There are two ways of describing, or specifying the members of, a set. One way is by intensional definition, using a rule or semantic description: A is the set whose members are the first four positive integers; B is the set of colors of the French flag. The second way is by extension, that is, listing each member of the set. An extensional definition is denoted by enclosing the list of members in curly brackets: C = {4, 2, 1, 3} and D = {blue, white, red}. One often has the choice of specifying a set either intensionally or extensionally; in the examples above, for instance, A = C and B = D. There are two important points to note about sets. First, in an extensional definition, a set member can be listed two or more times. However, by extensionality, two definitions of sets which differ only in that one of the definitions lists set members multiple times define, in fact, the same set. Hence, the set {11, 6, 6} is identical to the set {11, 6}.
The second important point is that the order in which the elements of a set are listed is irrelevant. We can illustrate these two important points with an example: {6, 11} = {11, 6} = {11, 6, 6, 11}. For sets with many elements, the enumeration of members can be abbreviated; for instance, the set of the first thousand positive integers may be specified extensionally as {1, 2, 3, ..., 1000}, where the ellipsis indicates that the list continues in the obvious way. Ellipses may also be used where sets have infinitely many members; thus the set of positive even numbers can be written as {2, 4, 6, 8, ...}.
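The two points above (repetition is ignored, order is irrelevant) can be seen directly in Python, whose built-in set type follows the mathematical definition; this is a minimal illustrative sketch:

```python
# Sets ignore duplicates and ordering, matching the extensionality principle.
A = {4, 2, 1, 3}                 # extensional definition, unordered
C = {1, 2, 3, 4}
assert A == C                    # order of listing does not matter

B = {11, 6, 6}                   # a member listed twice...
assert B == {11, 6}              # ...defines the same set

# An intensional definition (a rule) via a set comprehension:
evens = {n for n in range(1, 20) if n % 2 == 0}
assert 6 in evens and 7 not in evens
```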
2.
Associative property
–
In mathematics, the associative property is a property of some binary operations. In propositional logic, associativity is a rule of replacement for expressions in logical proofs. Within an expression containing two or more occurrences in a row of the same associative operator, rearranging the parentheses will not change its value. Consider the following equations:

(2 + 3) + 4 = 2 + (3 + 4) = 9
2 × (3 × 4) = (2 × 3) × 4 = 24

Even though the parentheses were rearranged on each line, the values of the expressions were not altered. Since this holds true when performing addition and multiplication on any real numbers, it can be said that addition and multiplication of real numbers are associative operations. Associativity is not to be confused with commutativity, which addresses whether or not the order of two operands changes the result. For example, the order doesn't matter in the multiplication of real numbers, that is, a × b = b × a. Associative operations are abundant in mathematics; in fact, many algebraic structures explicitly require their binary operations to be associative. However, many important and interesting operations are non-associative; some examples include subtraction, exponentiation and the vector cross product. Formally, a binary operation on a set S is associative if (x y) z = x (y z) = x y z for all x, y, z in S. The associative law can also be expressed in functional notation thus: f(f(x, y), z) = f(x, f(y, z)). If a binary operation is associative, repeated application of the operation produces the same result regardless of how valid pairs of parentheses are inserted in the expression. This is called the generalized associative law; thus the product can be written unambiguously as abcd. As the number of elements increases, the number of ways to insert parentheses grows quickly. Some examples of associative operations include the following. String concatenation is associative. In arithmetic, addition and multiplication of real numbers are associative, i.e.

(x + y) + z = x + (y + z) = x + y + z
(x y) z = x (y z) = x y z   for all x, y, z ∈ R.

Because of associativity, the grouping parentheses can be omitted without ambiguity. Addition and multiplication of complex numbers and quaternions are associative.
Addition of octonions is also associative, but multiplication of octonions is non-associative. The greatest common divisor and least common multiple functions act associatively:

gcd(gcd(x, y), z) = gcd(x, gcd(y, z)) = gcd(x, y, z)
lcm(lcm(x, y), z) = lcm(x, lcm(y, z)) = lcm(x, y, z)   for all x, y, z ∈ Z.

Taking the intersection or the union of sets is associative:

(A ∩ B) ∩ C = A ∩ (B ∩ C) = A ∩ B ∩ C
(A ∪ B) ∪ C = A ∪ (B ∪ C) = A ∪ B ∪ C   for all sets A, B, C.

Slightly more generally, given four sets M, N, P and Q, with maps h : M → N, g : N → P, and f : P → Q, composition satisfies f ∘ (g ∘ h) = (f ∘ g) ∘ h; in short, composition of maps is always associative. Consider a set with three elements, A, B, and C, equipped with a binary operation given by a table; such an operation need not be associative, since for example the products (A ∗ B) ∗ C and A ∗ (B ∗ C) may differ.
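The associative identities above can be checked numerically; this short sketch verifies associativity of addition, multiplication, gcd, and string concatenation, and shows that subtraction fails the law:

```python
from math import gcd

x, y, z = 2, 3, 4
assert (x + y) + z == x + (y + z) == 9          # addition is associative
assert (x * y) * z == x * (y * z) == 24         # so is multiplication
assert gcd(gcd(12, 18), 30) == gcd(12, gcd(18, 30))  # gcd acts associatively

# String concatenation is associative:
a, b, c = "foo", "bar", "baz"
assert (a + b) + c == a + (b + c) == "foobarbaz"

# Subtraction is not associative:
assert (x - y) - z != x - (y - z)               # -5 versus 3
```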
3.
Multiplication
–
Multiplication is one of the four elementary mathematical operations of arithmetic, with the others being addition, subtraction and division. Multiplication can also be visualized as counting objects arranged in a rectangle or as finding the area of a rectangle whose sides have given lengths. The area of a rectangle does not depend on which side is measured first, which illustrates the commutative property. The product of two measurements is a new type of measurement; for instance, multiplying the lengths of the two sides of a rectangle gives its area. This is the subject of dimensional analysis. The inverse operation of multiplication is division. For example, since 4 multiplied by 3 equals 12, then 12 divided by 3 equals 4: multiplication by 3, followed by division by 3, yields the original number. Multiplication is also defined for other types of numbers, such as complex numbers, and for more abstract constructs, like matrices. For these more abstract constructs, the order in which the operands are multiplied sometimes does matter. A listing of the many different kinds of products that are used in mathematics is given in the product page. In arithmetic, multiplication is often written using the sign × between the terms, that is, in infix notation. There are other mathematical notations for multiplication: it is also denoted by dot signs, usually a middle-position dot, as in 5 ⋅ 2 or 5 . 2. The middle dot notation, encoded in Unicode as U+22C5 ⋅ DOT OPERATOR, is standard in the United States and the United Kingdom; when the dot operator character is not accessible, the interpunct (·) is used. Other countries that use a comma as a decimal mark use either the period or a middle dot for multiplication. In algebra, multiplication involving variables is often written as a juxtaposition (e.g., xy for x times y); this notation can also be used for quantities that are surrounded by parentheses. In vector multiplication, there is a distinction between the cross and the dot symbols.
The cross symbol generally denotes taking the cross product of two vectors, yielding a vector as the result, while the dot denotes taking the dot product of two vectors, resulting in a scalar. In computer programming, the asterisk is still the most common notation; this is due to the fact that most computers historically were limited to small character sets that lacked a multiplication sign, while the asterisk appeared on every keyboard. This usage originated in the FORTRAN programming language. The numbers to be multiplied are generally called the factors. The number to be multiplied is called the multiplicand, while the number of times the multiplicand is to be multiplied is the multiplier. Usually the multiplier is placed first and the multiplicand is placed second; however, sometimes the first factor is the multiplicand. Additionally, there are some sources in which the term multiplicand is regarded as a synonym for factor. In algebra, a number that is the multiplier of a variable or expression is called a coefficient. The result of a multiplication is called a product. A product of integers is a multiple of each factor; for example, 15 is the product of 3 and 5, and is both a multiple of 3 and a multiple of 5.
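The factor/multiple relationship and the inverse role of division can be sketched briefly in Python:

```python
# A product of integers is a multiple of each of its factors.
multiplicand, multiplier = 5, 3
product = multiplicand * multiplier
assert product == 15
assert product % 3 == 0 and product % 5 == 0   # 15 is a multiple of 3 and of 5

# Division inverts multiplication: 4 * 3 == 12, so 12 / 3 == 4.
assert (4 * 3) // 3 == 4

# The rectangle-area picture illustrates commutativity:
assert 4 * 7 == 7 * 4
```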
4.
Group (mathematics)
–
In mathematics, a group is an algebraic structure consisting of a set of elements equipped with an operation that combines any two elements to form a third element. The operation satisfies four conditions called the group axioms, namely closure, associativity, identity and invertibility. The abstract formalization of the group axioms allows entities with highly diverse mathematical origins in abstract algebra and beyond to be handled in a flexible way while retaining their essential structural aspects. The ubiquity of groups in areas within and outside mathematics makes them a central organizing principle of contemporary mathematics. Groups share a fundamental kinship with the notion of symmetry. The concept of a group arose from the study of polynomial equations; after contributions from other fields such as number theory and geometry, the group notion was generalized and firmly established around 1870. Modern group theory—an active mathematical discipline—studies groups in their own right. To explore groups, mathematicians have devised various notions to break groups into smaller, better-understandable pieces, such as subgroups, quotient groups and simple groups. A rich theory has developed for finite groups, which culminated with the classification of finite simple groups. Since the mid-1980s, geometric group theory, which studies finitely generated groups as geometric objects, has become a particularly active area in group theory. One of the most familiar groups is the set of integers Z, which consists of the numbers ..., −4, −3, −2, −1, 0, 1, 2, 3, 4, ..., together with addition. The following properties of integer addition serve as a model for the group axioms given in the definition below. For any two integers a and b, the sum a + b is also an integer; that is, addition of integers always yields an integer. This property is known as closure under addition. For all integers a, b and c, (a + b) + c = a + (b + c). Expressed in words, adding a to b first, and then adding the result to c, gives the same final result as adding a to the sum of b and c. This property is known as associativity.
If a is any integer, then 0 + a = a + 0 = a. Zero is called the identity element of addition because adding it to any integer returns the same integer. For every integer a, there is an integer b such that a + b = b + a = 0. The integer b is called the inverse element of the integer a and is denoted −a. The integers, together with the operation +, form a mathematical object belonging to a broad class sharing similar structural aspects. To appropriately understand these structures as a collective, the following abstract definition is developed.
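The four group axioms for the integers under addition can be spot-checked on a handful of sample values; this is an illustrative sketch, not a proof:

```python
# Checking the group axioms for (Z, +) on sample integers.
samples = [-3, 0, 2, 7]
for a in samples:
    assert a + 0 == 0 + a == a          # 0 is the identity element
    assert a + (-a) == 0                # -a is the inverse of a
for a in samples:
    for b in samples:
        assert isinstance(a + b, int)   # closure under addition
        for c in samples:
            assert (a + b) + c == a + (b + c)   # associativity
```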
5.
Magma (algebra)
–
In abstract algebra, a magma is a basic kind of algebraic structure. Specifically, a magma consists of a set, M, equipped with a single binary operation. The binary operation must be closed by definition, but no other properties are imposed. The term groupoid was introduced in 1926 by Heinrich Brandt describing his Brandt groupoid; the term was later appropriated by B. A. Hausmann. In a couple of reviews of subsequent papers in Zentralblatt, Brandt strongly disagreed with this overloading of terminology. According to Bergman and Hausknecht, "There is no generally accepted word for a set with a not necessarily associative binary operation." The term magma was used by Serre, and it also appears in Bourbaki's Éléments de mathématique, Algèbre, chapitres 1 à 3, 1970. A magma is a set M matched with an operation, •, that sends any two elements a, b in M to another element a • b; the symbol • is a general placeholder for a properly defined operation. To qualify as a magma, the set and operation must satisfy the closure requirement: for all a, b in M, the result a • b is also in M. In mathematical notation: ∀ a, b ∈ M : a • b ∈ M. If • is instead a partial operation, then M is called a partial magma or, more often, a partial groupoid. A morphism of magmas is a function, f : M → N, mapping magma M to magma N, that preserves the binary operation. The magma operation may be applied repeatedly, and in the general, non-associative case, the order matters, which is notated with parentheses. A way to completely avoid the use of parentheses is prefix notation. The total number of different ways of writing n applications of the magma operator is given by the Catalan number Cn. Thus, for example, C2 = 2, which is just the statement that (ab)c and a(bc) are the only two ways of pairing three elements of a magma with two operations; less trivially, C3 = 5: ((ab)c)d, (a(bc))d, (ab)(cd), a((bc)d), and a(b(cd)). The numbers of non-isomorphic magmas having 0, 1, 2, 3, 4, ... elements are 1, 1, 10, 3330, 178981952, ...
The corresponding numbers of simultaneously non-isomorphic and non-antiisomorphic magmas are 1, 1, 7, 1734, 89521056, ... A free magma, MX, on a set, X, is the most general possible magma generated by X. It can be described as the set of non-associative words on X with parentheses retained. It can also be viewed, in terms familiar in computer science, as the magma of binary trees with leaves labelled by elements of X; the operation is that of joining trees at the root.
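The claim that the parenthesizations of n applications of a magma operation are counted by the Catalan numbers can be checked with the standard recurrence C0 = 1, C(m+1) = Σ Ci · C(m−i); a minimal sketch:

```python
# Catalan numbers count the ways to parenthesize repeated applications
# of a (non-associative) magma operation.
def catalan(n):
    # C_0 = 1; C_{m+1} = sum_{i=0}^{m} C_i * C_{m-i}
    c = [1]
    for m in range(n):
        c.append(sum(c[i] * c[m - i] for i in range(m + 1)))
    return c[n]

# C_2 = 2: (ab)c and a(bc); C_3 = 5, matching the five groupings listed above.
assert [catalan(n) for n in range(5)] == [1, 1, 2, 5, 14]
```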
6.
Commutative property
–
In mathematics, a binary operation is commutative if changing the order of the operands does not change the result. It is a fundamental property of many binary operations, and many mathematical proofs depend on it. Most familiar as the name of the property that says 3 + 4 = 4 + 3 or 2 × 5 = 5 × 2, the property can also be used in more advanced settings. The name is needed because there are operations, such as division and subtraction, that do not have it. The commutative property is a property associated with binary operations and functions. If the commutative property holds for a pair of elements under a binary operation, then the two elements are said to commute under that operation. The term "commutative" is used in several related senses. Putting on socks resembles a commutative operation, since which sock is put on first is unimportant; either way, the result (wearing both socks) is the same. In contrast, putting on underwear and trousers is not commutative. The commutativity of addition is observed when paying for an item with cash: regardless of the order in which the bills are handed over, they always give the same total. The multiplication of real numbers is commutative, since y z = z y for all y, z ∈ R. For example, 3 × 5 = 5 × 3. Some binary truth functions are also commutative, since the truth tables for the functions are the same when one changes the order of the operands. For example, the logical biconditional function p ↔ q is equivalent to q ↔ p; this function is also written as p IFF q, or as p ≡ q, or as Epq. Further examples of commutative binary operations include addition and multiplication of complex numbers, and addition and scalar multiplication of vectors. Concatenation, the act of joining character strings together, is a noncommutative operation. Rotating a book 90° around a vertical axis and then 90° around a horizontal axis produces a different orientation than when the rotations are performed in the opposite order. The twists of the Rubik's Cube are noncommutative; this can be studied using group theory.
Other examples of non-commutative binary operations include function composition and matrix multiplication. Records of the use of the commutative property go back to ancient times. The Egyptians used the commutative property of multiplication to simplify computing products. Euclid is known to have assumed the commutative property of multiplication in his book Elements.
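The contrast between commutative and non-commutative operations can be demonstrated directly; a brief sketch using the examples from the text:

```python
# Commutative operations: the order of operands does not matter.
assert 3 + 4 == 4 + 3
assert 2 * 5 == 5 * 2

# The logical biconditional (p IFF q) is commutative:
iff = lambda p, q: p == q
for p in (True, False):
    for q in (True, False):
        assert iff(p, q) == iff(q, p)

# String concatenation is NOT commutative:
assert "bear" + "hug" != "hug" + "bear"

# Neither is subtraction:
assert 5 - 3 != 3 - 5
```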
7.
Matrix multiplication
–
In mathematics, matrix multiplication or the matrix product is a binary operation that produces a matrix from two matrices. The definition is motivated by linear equations and linear transformations on vectors, which have applications in applied mathematics and physics. When two linear transformations are represented by matrices, then the matrix product represents the composition of the two transformations. The matrix product is not commutative in general, although it is associative and is distributive over matrix addition. The identity element of the matrix product is the identity matrix, and a square matrix may have an inverse matrix. Determinant multiplicativity applies to the matrix product. The matrix product is also important for matrix groups, and the theory of group representations and irreps. Computing matrix products is both a central operation in many numerical algorithms and potentially time consuming, making it one of the most well-studied problems in numerical computing. Various algorithms have been devised for computing C = AB, especially for large matrices. Index notation is often the clearest way to express definitions, and is used as standard in the literature. The i, j entry of matrix A is indicated by (A)ij or Aij, whereas a numerical label on a collection of matrices is subscripted only, e.g. A1, A2. Assume two matrices are to be multiplied, A of size n × m and B of size m × p. The i, j entry of the product is obtained by multiplying the entries of row i of A by the entries of column j of B and summing the results over k:

(AB)ij = ∑ (k = 1 to m) Aik Bkj.

Thus the product AB is defined if the number of columns in A is equal to the number of rows in B. Each entry may be computed one at a time. Sometimes the summation convention is used, as it is understood to sum over the repeated index k; to prevent any ambiguity, this convention will not be used in the article. Usually the entries are numbers or expressions, but they can even be matrices themselves.
The matrix product can still be calculated exactly the same way; see below for details on how the matrix product can be calculated in terms of blocks taking the forms of rows and columns. The figure to the right illustrates diagrammatically the product of two matrices A and B, showing how each intersection in the product matrix corresponds to a row of A and a column of B. For a row matrix A of size 1 × 3 and a column matrix B of size 3 × 1, note that AB and BA are two different matrices: the first is a 1 × 1 matrix while the second is a 3 × 3 matrix. More generally, the product AB may be defined while BA is not, whenever the number of columns of B does not equal the number of rows of A. The product of a square matrix multiplied by a column matrix arises naturally in linear algebra, for solving linear equations and representing linear transformations. By choosing the entries a, b, c, p, q, r, u, v, w in A appropriately, A can represent a variety of transformations such as rotations, scaling and reflections, and shears.
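The defining sum (AB)ij = Σk Aik Bkj translates directly into code; this sketch uses plain nested lists so the shape condition and non-commutativity are explicit:

```python
# (AB)_ij = sum_k A_ik * B_kj: defined only when columns(A) == rows(B).
def matmul(A, B):
    m, p = len(B), len(B[0])
    assert len(A[0]) == m, "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
assert matmul(A, B) == [[2, 1], [4, 3]]
assert matmul(B, A) == [[3, 4], [1, 2]]   # AB != BA: not commutative
```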
8.
Abelian group
–
In abstract algebra, an abelian group, also called a commutative group, is a group in which the result of applying the group operation to two group elements does not depend on the order in which they are written. That is, these are the groups that obey the axiom of commutativity. Abelian groups generalize the arithmetic of addition of integers; they are named after Niels Henrik Abel. The concept of an abelian group is one of the first concepts encountered in undergraduate abstract algebra, from which many other basic concepts, such as modules, are developed. The theory of abelian groups is generally simpler than that of their non-abelian counterparts; on the other hand, the theory of infinite abelian groups is an area of current research. An abelian group is a set, A, together with an operation • that combines any two elements a and b to form another element denoted a • b; the symbol • is a general placeholder for a concretely given operation. Identity element: there exists an element e in A such that for all elements a in A, the equation e • a = a • e = a holds. Inverse element: for each a in A, there exists an element b in A such that a • b = b • a = e. Commutativity: for all a, b in A, a • b = b • a. A group in which the operation is not commutative is called a non-abelian group or non-commutative group. There are two main notational conventions for abelian groups: additive and multiplicative. Generally, the multiplicative notation is the usual notation for groups, while the additive notation is the usual notation for modules. To verify that a finite group is abelian, a table, known as a Cayley table, can be constructed in a similar fashion to a multiplication table. If the group is G = {g1, ..., gn} under the operation ⋅, the (i, j)th entry of this table contains the product gi ⋅ gj. The group is abelian if and only if this table is symmetric about the main diagonal. This is true since if the group is abelian, then gi ⋅ gj = gj ⋅ gi, which implies that the (i, j)th entry of the table equals the (j, i)th entry. Every cyclic group G is abelian, because if x, y are in G, then x = a^m and y = a^n for some generator a, and so xy = a^m a^n = a^(m+n) = a^(n+m) = a^n a^m = yx. Thus the integers, Z, form an abelian group under addition, as do the integers modulo n. Every ring is an abelian group with respect to its addition operation.
In a commutative ring the invertible elements, or units, form an abelian multiplicative group. In particular, the real numbers are an abelian group under addition, and the nonzero real numbers are an abelian group under multiplication.
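The Cayley-table criterion described above (abelian if and only if the table is symmetric about the main diagonal) is easy to check mechanically; here is a sketch for the integers modulo n under addition:

```python
# A finite group is abelian iff its Cayley table is symmetric about
# the main diagonal. Example: Z/5Z under addition modulo 5.
def cayley_table(n):
    return [[(i + j) % n for j in range(n)] for i in range(n)]

t = cayley_table(5)
assert all(t[i][j] == t[j][i] for i in range(5) for j in range(5))
```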
9.
Monoid
–
In abstract algebra, a branch of mathematics, a monoid is an algebraic structure with a single associative binary operation and an identity element. Monoids are studied in semigroup theory, as they are semigroups with identity. Monoids occur in several branches of mathematics; for instance, they can be regarded as categories with a single object. Thus, they capture the idea of composition within a set. In fact, all functions from a set into itself naturally form a monoid with respect to function composition. Monoids are also commonly used in computer science, both in its foundational aspects and in practical programming. The set of strings built from a given set of characters is a free monoid. The transition monoid and syntactic monoid are used in describing finite state machines, whereas trace monoids and history monoids provide a foundation for process calculi and concurrent computation. One of the more important results in the study of monoids is the Krohn–Rhodes theorem. The history of monoids, as well as a discussion of additional general properties, is found in the article on semigroups. A monoid is a set S equipped with an associative binary operation and an identity element: there exists an element e in S such that for every element a in S, e • a = a • e = a. In other words, a monoid is a semigroup with an identity element. It can also be thought of as a magma with associativity and identity. The identity element of a monoid is unique. A monoid in which each element has an inverse is a group. Depending on the context, the symbol for the operation may be omitted, so that the operation is denoted by juxtaposition, for example writing ab instead of a • b. This notation does not imply that it is numbers being multiplied. A subset N of M that is closed under the operation and contains the identity element is a submonoid; N is thus a monoid under the binary operation inherited from M. If there is a generating set of M that has finite cardinality, then M is said to be finitely generated. Not every subset S of a monoid will generate a monoid, as the generated structure may lack an identity element. A monoid whose operation is commutative is called a commutative monoid; commutative monoids are often written additively.
Any commutative monoid is endowed with its algebraic preordering ≤, defined by x ≤ y if there exists z such that x + z = y. An order-unit of a commutative monoid M is an element u of M such that for any element x of M, there exists a positive integer n such that x ≤ nu. This is often used in the case that M is the positive cone of a partially ordered abelian group G. A monoid for which the operation is commutative for some, but not all, elements is a trace monoid; trace monoids commonly occur in the theory of concurrent computation.
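Two of the monoids named above can be sketched concretely: strings under concatenation (with the empty string as identity) and functions from a set to itself under composition (with the identity function as identity):

```python
# Strings under concatenation form a monoid with identity "".
s, t, u = "ab", "cd", "ef"
assert (s + t) + u == s + (t + u)        # associativity
assert "" + s == s + "" == s             # "" is the identity element

# Self-maps under composition form a monoid with the identity function.
f = lambda x: x + 1
g = lambda x: 2 * x
identity = lambda x: x
compose = lambda p, q: (lambda x: p(q(x)))
assert compose(f, g)(3) == 7             # (f after g)(3) = f(6) = 7
assert compose(identity, f)(3) == compose(f, identity)(3) == f(3)
```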
10.
String (computer science)
–
In computer programming, a string is traditionally a sequence of characters, either as a literal constant or as some kind of variable. The latter may allow its elements to be mutated and the length changed. A string is generally understood as a data type and is often implemented as an array of bytes that stores a sequence of elements, typically characters, using some character encoding. A string may also denote more general arrays or other sequence data types and structures. When a string appears literally in source code, it is known as a string literal or an anonymous string. Strings are also studied in formal languages, which are used in logic and theoretical computer science. Let Σ be a non-empty finite set of symbols, called the alphabet; no assumption is made about the nature of the symbols. A string over Σ is any finite sequence of symbols from Σ. For example, if Σ = {0, 1}, then 01011 is a string over Σ. The length of a string s is the number of symbols in s and can be any non-negative integer; it is often denoted as |s|. The empty string is the unique string over Σ of length 0 and is denoted ε. The set of all strings over Σ of length n is denoted Σn; for example, if Σ = {0, 1}, then Σ2 = {00, 01, 10, 11}. Note that Σ0 = {ε} for any alphabet Σ. The set of all strings over Σ of any length is the Kleene closure of Σ and is denoted Σ*. In terms of Σn, Σ* = ⋃ (n ∈ N ∪ {0}) Σn. Although the set Σ* itself is countably infinite, each element of Σ* is a string of finite length. A set of strings over Σ is called a language over Σ. For example, if Σ = {0, 1}, the set of strings with an even number of zeros is a formal language over Σ. Concatenation is an important binary operation on Σ*. For any two strings s and t in Σ*, their concatenation is defined as the sequence of symbols in s followed by the sequence of symbols in t, and is denoted st. For example, if Σ is the Latin alphabet, s = bear, and t = hug, then st = bearhug. String concatenation is an associative, but non-commutative, operation.
The empty string ε serves as the identity element: for any string s, εs = sε = s. Therefore, the set Σ* and the concatenation operation form a monoid, the free monoid generated by Σ. In addition, the length function defines a monoid homomorphism from Σ* to the non-negative integers, since |st| = |s| + |t|. A string s is said to be a substring or factor of t if there exist (possibly empty) strings u and v such that t = usv.
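These monoid facts about Σ* can be sketched directly with Python strings; the length homomorphism and the substring (factor) relation follow the definitions above:

```python
# Sigma* under concatenation: "" is the identity, concat is associative,
# and len is a monoid homomorphism to (N, +).
s, t = "bear", "hug"
assert "" + s == s + "" == s              # empty string is the identity
assert (s + t) + s == s + (t + s)         # associativity
assert len(s + t) == len(s) + len(t)      # |st| = |s| + |t|
assert len("") == 0                       # identity maps to identity

# "arh" is a factor of "bearhug": bearhug = "be" + "arh" + "ug"
assert "arh" in s + t
```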
11.
Integer
–
An integer is a number that can be written without a fractional component. For example, 21, 4, 0, and −2048 are integers, while 9.75 and 5 1⁄2 are not. The set of integers consists of zero, the positive natural numbers (also called whole numbers or counting numbers), and their additive inverses. It is often denoted by a boldface Z or blackboard bold ℤ, standing for the German word Zahlen. ℤ is a subset of the sets of rational and real numbers and, like the set of natural numbers, is countably infinite. The integers form the smallest group and the smallest ring containing the natural numbers. In algebraic number theory, the integers are sometimes called rational integers to distinguish them from the more general algebraic integers; in fact, the (rational) integers are the algebraic integers that are also rational numbers. Like the natural numbers, Z is closed under the operations of addition and multiplication; that is, the sum and product of any two integers is an integer. However, with the inclusion of the negative natural numbers and, importantly, 0, Z is also closed under subtraction. The integers form a unital ring which is the most basic one, in the following sense: for any unital ring, there is a unique ring homomorphism from the integers into this ring. This universal property, namely to be an initial object in the category of rings, characterizes the ring Z. Z is not closed under division, since the quotient of two integers need not be an integer; although the natural numbers are closed under exponentiation, the integers are not. The following lists some of the basic properties of addition and multiplication for any integers a, b and c. In the language of abstract algebra, the first five properties listed above for addition say that Z under addition is an abelian group. As a group under addition, Z is a cyclic group; in fact, Z under addition is the only infinite cyclic group, in the sense that any infinite cyclic group is isomorphic to Z. The first four properties listed above for multiplication say that Z under multiplication is a commutative monoid. However, not every integer has a multiplicative inverse; e.g., there is no integer x such that 2x = 1, because the left hand side is even.
This means that Z under multiplication is not a group. All the rules from the above property table, except for the last, taken together say that Z together with addition and multiplication is a commutative ring with unity. It is the prototype of all objects of such algebraic structure. Only those equalities of expressions are true in Z for all values of variables which are true in any unital commutative ring; note that certain non-zero integers map to zero in certain rings. The lack of zero divisors in the integers (the last property in the table) means that the commutative ring Z is an integral domain.
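The closure properties discussed above can be observed directly in Python, where integer arithmetic stays in int for addition, subtraction and multiplication but can leave the integers for division and exponentiation:

```python
# Z is closed under +, - and *, but not under division or exponentiation.
a, b = 7, -3
assert isinstance(a + b, int)
assert isinstance(a - b, int)
assert isinstance(a * b, int)

# The quotient of two integers need not be an integer:
assert 7 / 2 == 3.5                    # not in Z
# A negative exponent also leaves Z:
assert 2 ** -1 == 0.5
# And 2x = 1 has no integer solution, so 2 has no multiplicative inverse in Z:
assert not any(2 * x == 1 for x in range(-10, 11))
```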
12.
Quasigroup
–
In mathematics, especially in abstract algebra, a quasigroup is an algebraic structure resembling a group in the sense that "division" is always possible. Quasigroups differ from groups mainly in that they need not be associative. A quasigroup with an identity element is called a loop. There are at least two structurally equivalent formal definitions of quasigroup: one defines a quasigroup as a set with one binary operation, and the other, from universal algebra, defines a quasigroup as having three primitive operations. The homomorphic image of a quasigroup defined with a single binary operation, however, need not be a quasigroup. We begin with the first definition. A quasigroup is a set, Q, with a binary operation, ∗, obeying the Latin square property. This states that, for each a and b in Q, there exist unique elements x and y in Q such that a ∗ x = b and y ∗ a = b. The uniqueness requirement can be replaced by the requirement that the magma be cancellative. The unique solutions to these equations are written x = a \ b and y = b / a; the operations \ and / are called, respectively, left and right division. The empty set equipped with the empty binary operation satisfies this definition of a quasigroup; some authors accept the empty quasigroup but others explicitly exclude it. Algebraic structures axiomatized solely by identities are called varieties, and many standard results in universal algebra hold only for varieties. Quasigroups are varieties if left and right division are taken as primitive: a quasigroup (Q, ∗, \, /) is a type (2, 2, 2) algebra satisfying the identities

y = x ∗ (x \ y),
y = x \ (x ∗ y),
y = (y / x) ∗ x,
y = (y ∗ x) / x.

In other words, multiplication and division in either order, one after the other, on the same side by the same element, have no net effect. Hence if (Q, ∗) is a quasigroup according to the first definition, then (Q, ∗, \, /) is the same quasigroup in the sense of universal algebra. A loop is a quasigroup with an identity element, that is, an element e such that x ∗ e = x and e ∗ x = x for all x in Q. It follows that the identity element, e, is unique, and that every element of Q has a unique left and right inverse. Since the presence of an identity element is essential, a loop cannot be empty.
A loop that is associative is a group. A group can have a non-associative pique isotope, but it cannot have a non-associative loop isotope. There are also some weaker associativity-like properties which have been given special names. A left Bol loop is a loop that satisfies x ∗ (y ∗ (x ∗ z)) = (x ∗ (y ∗ x)) ∗ z for each x, y and z in Q; a right Bol loop satisfies the mirror identity. A loop that is both a left and right Bol loop is a Moufang loop. A narrower class is a totally symmetric quasigroup, in which all conjugates coincide as one operation: xy = x / y = x \ y. Another way to define a totally symmetric quasigroup is as a semisymmetric quasigroup which additionally is commutative.
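The Latin square property is straightforward to test for a finite operation table: every element must occur exactly once in each row and each column. A minimal sketch, using subtraction modulo 3, which is a quasigroup operation that is not associative:

```python
# A finite magma is a quasigroup iff its operation table is a Latin square.
def is_latin_square(table):
    n = len(table)
    symbols = set(range(n))
    rows_ok = all(set(row) == symbols for row in table)
    cols_ok = all({table[i][j] for i in range(n)} == symbols for j in range(n))
    return rows_ok and cols_ok

# Subtraction modulo 3: a quasigroup, but not a group (not associative).
table = [[(i - j) % 3 for j in range(3)] for i in range(3)]
assert is_latin_square(table)
assert ((1 - 1) - 1) % 3 != (1 - (1 - 1)) % 3   # non-associative
```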
13.
Division (mathematics)
–
Division is one of the four basic operations of arithmetic, the others being addition, subtraction, and multiplication. The division of two natural numbers is the process of calculating the number of times one number is contained within the other. For example, in the picture on the right, the 20 apples are divided into four groups of five apples. Division can also be thought of as the process of evaluating a fraction, and fractional notation is commonly used to represent division. Division is the inverse of multiplication: if a × b = c, then a = c ÷ b, as long as b is not zero. Division by zero is undefined for the real numbers and in most other contexts, because if b = 0, then a cannot be deduced from b and c; in some contexts, division by zero can be defined, although only to a limited extent. In division, the dividend is divided by the divisor to get a quotient. In the above example, 20 is the dividend, five is the divisor, and four is the quotient. In some cases, the divisor may not be contained fully in the dividend; for example, 10 ÷ 3 leaves a remainder of one, as 10 is not a multiple of three. Sometimes this remainder is added to the quotient as a fractional part, but in the context of integer division, where numbers have no fractional part, the remainder is kept separately or discarded. Besides dividing apples, division can be applied to other physical and abstract quantities. Division has been defined in several contexts, such as for the real and complex numbers and for more abstract settings such as vector spaces and fields. Division is the most mentally difficult of the four basic operations of arithmetic. Teaching the objective concept of dividing integers introduces students to the arithmetic of fractions. Unlike addition, subtraction, and multiplication, the set of all integers is not closed under division: dividing two integers may result in a remainder. To complete the division of the remainder, the number system is extended to include fractions, or rational numbers as they are more generally called.
When students advance to algebra, the theory of division intuited from arithmetic naturally extends to algebraic division of variables and polynomials. Division is often shown in algebra and science by placing the dividend over the divisor with a horizontal line, also called a fraction bar, between them. For example, "a divided by b" is written a/b. A fraction is a division expression where both dividend and divisor are integers, and there is no implication that the division must be evaluated further. A second way to show division is to use the obelus (÷), common in arithmetic, in this manner: a ÷ b; however, ISO 80000-2, section 9.6, states that this symbol should not be used. The obelus is also used alone to represent the division operation itself. In some non-English-speaking cultures, "a divided by b" is written a : b; this notation was introduced in 1631 by William Oughtred in his Clavis Mathematicae and later popularized by Gottfried Wilhelm Leibniz.
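Integer division with remainder, as described above, satisfies dividend = divisor × quotient + remainder; a brief sketch using Python's divmod:

```python
# Integer division: 10 / 3 gives quotient 3 with remainder 1.
dividend, divisor = 10, 3
quotient, remainder = divmod(dividend, divisor)
assert (quotient, remainder) == (3, 1)
assert divisor * quotient + remainder == dividend

# Division inverts multiplication when the divisor is nonzero:
assert (4 * 3) / 3 == 4
```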
14.
Theoretical computer science
–
It is not easy to circumscribe the theoretical areas precisely. Work in this field is often distinguished by its emphasis on mathematical technique and rigor; despite this broad scope, the "theory people" in computer science self-identify as different from the "applied people". Some characterize themselves as doing the science underlying the field of computing, while others suggest that it is impossible to separate theory and application. This means that the theory people regularly use experimental science done in less-theoretical areas such as software system research. It also means that there is more cooperation than mutually exclusive competition between theory and application. These developments led to the study of logic and computability. Information theory was added to the field with Claude Shannon's 1948 mathematical theory of communication; in the same decade, Donald Hebb introduced a mathematical model of learning in the brain. With mounting biological data supporting this hypothesis with some modification, the fields of neural networks and parallel distributed processing were established. With the development of quantum mechanics at the beginning of the 20th century came the concept that mathematical operations could be performed on an entire particle wavefunction; in other words, one could compute functions on multiple states simultaneously. Modern theoretical computer science research builds on these basic developments, but includes many other mathematical and interdisciplinary problems that have been posed since. In 1971, Stephen Cook and, working independently, Leonid Levin proved that there exist practically relevant problems that are NP-complete, a landmark result in computational complexity theory. An algorithm is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning; an algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. 
The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. A data structure is a particular way of organizing data in a computer so that it can be used efficiently. Different kinds of structures are suited to different kinds of applications. For example, databases use B-tree indexes for retrieving small percentages of data, and compilers use hash tables to look up identifiers. Data structures provide a means to manage large amounts of data efficiently for uses such as large databases and internet indexing services. Usually, efficient data structures are key to designing efficient algorithms; some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design. Storing and retrieving can be carried out on data stored in both main memory and secondary memory. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used; the theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage
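The point that different structures suit different applications can be sketched with a minimal Python example (illustrative names only): the same membership query is a linear scan over a list but a near-constant-time hash lookup over a set.

```python
# Sketch: the same query against two data structures.
# A list membership test scans every element; a set (hash table)
# answers in roughly constant time on average.
items = list(range(100_000))
as_list = items          # O(n) membership test
as_set = set(items)      # O(1) average membership test

assert 99_999 in as_list   # walks the whole list
assert 99_999 in as_set    # single hash lookup
```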
15.
Finite-state machine
–
A finite-state machine (FSM) or finite-state automaton, finite automaton, or simply a state machine, is a mathematical model of computation. It is an abstract machine that can be in exactly one of a finite number of states at any given time. The FSM can change from one state to another in response to external inputs; the change from one state to another is called a transition. An FSM is defined by a list of its states, its initial state, and the inputs that trigger each transition. The behavior of state machines can be observed in many devices in modern society that perform a predetermined sequence of actions depending on a sequence of events with which they are presented. The finite-state machine has less computational power than some other models of computation such as the Turing machine: there are tasks that a Turing machine can do that an FSM cannot. This is because an FSM's memory is limited by the number of states it has. FSMs are studied in the more general field of automata theory. An example of a mechanism that can be modeled by a state machine is a turnstile. A turnstile, used to control access to subways and amusement park rides, is a gate with three rotating arms at waist height, one across the entryway. Initially the arms are locked, blocking the entry and preventing patrons from passing through. Depositing a coin or token in a slot on the turnstile unlocks the arms, allowing a single customer to push through. After the customer passes through, the arms are locked again until another coin is inserted. Considered as a state machine, the turnstile has two possible states: Locked and Unlocked. There are two inputs that affect its state: putting a coin in the slot (coin) and pushing the arm (push). In the locked state, pushing on the arm has no effect; no matter how many times the input push is given, it stays in the locked state. Putting a coin in, that is, giving the machine a coin input, shifts the state from Locked to Unlocked. In the unlocked state, putting additional coins in has no effect; however, a customer pushing through the arms, giving a push input, shifts the state back to Locked. 
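The turnstile described above can be sketched directly as a transition table in Python; this is only an illustration of the model, with the states and inputs named as in the text.

```python
# The turnstile FSM: two states (Locked, Unlocked), two inputs (coin, push).
TRANSITIONS = {
    ("Locked", "coin"): "Unlocked",    # a coin unlocks the arms
    ("Locked", "push"): "Locked",      # pushing a locked arm does nothing
    ("Unlocked", "coin"): "Unlocked",  # extra coins have no effect
    ("Unlocked", "push"): "Locked",    # passing through relocks the arms
}

def step(state, event):
    return TRANSITIONS[(state, event)]

state = "Locked"  # initial state
for event in ["push", "coin", "coin", "push"]:
    state = step(state, event)
print(state)  # Locked
```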
Each state is represented by a node, and edges show the transitions from one state to another. Each arrow is labeled with the input that triggers that transition; an input that doesn't cause a change of state is represented by a circular arrow returning to the original state. The arrow into the Locked node from the dot indicates it is the initial state
16.
Probability theory
–
Probability theory is the branch of mathematics concerned with probability, the analysis of random phenomena. Although it is not possible to predict precisely the results of random events, patterns emerge when sequences of such events are studied; two representative mathematical results describing such patterns are the law of large numbers and the central limit theorem. As a mathematical foundation for statistics, probability theory is essential to many human activities that involve quantitative analysis of large sets of data. Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state. A great discovery of twentieth-century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics. Christiaan Huygens published a book on the subject in 1657, and the theory was developed further in the 19th century. Initially, probability theory mainly considered discrete events, and its methods were mainly combinatorial. Eventually, analytical considerations compelled the incorporation of continuous variables into the theory, and this culminated in modern probability theory, built on foundations laid by Andrey Nikolaevich Kolmogorov. Kolmogorov combined the notion of sample space, introduced by Richard von Mises, with measure theory. This became the mostly undisputed axiomatic basis for modern probability theory. Most introductions to probability theory treat discrete probability distributions and continuous probability distributions separately; the more mathematically advanced measure-theory-based treatment of probability covers the discrete, the continuous, and any mix of the two. Consider an experiment that can produce a number of outcomes. The set of all outcomes is called the sample space of the experiment. The power set of the sample space is formed by considering all different collections of possible results. For example, rolling an honest die produces one of six possible results. One collection of possible results corresponds to getting an odd number. Thus, the subset {1, 3, 5} is an element of the power set of the sample space of die rolls. 
In this case, {1, 3, 5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, that event is said to have occurred. Probability is a way of assigning every event a value between zero and one, with the requirement that the event made up of all possible results be assigned a value of one. For example, the probability of the event {1, 2, 3, 4, 6} is 5/6; this event encompasses the possibility of any number except five being rolled. The mutually exclusive event {5} has a probability of 1/6, and the event {1, 2, 3, 4, 5, 6} has a probability of 1. Discrete probability theory deals with events that occur in countable sample spaces. Modern definition: The modern definition starts with a finite or countable set called the sample space, which relates to the set of all possible outcomes in the classical sense, denoted by Ω
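The die probabilities discussed above can be checked by simple counting over the sample space; the sketch below uses exact fractions to avoid floating-point noise.

```python
# Probability of an event = (outcomes in the event) / (size of sample space)
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}

def prob(event):
    return Fraction(len(event & sample_space), len(sample_space))

assert prob({1, 3, 5}) == Fraction(1, 2)        # an odd number
assert prob({1, 2, 3, 4, 6}) == Fraction(5, 6)  # anything but five
assert prob({5}) == Fraction(1, 6)
assert prob(sample_space) == 1                  # certainty
```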
17.
Markov chain
–
In probability theory and related fields, a Markov process, named after the Russian mathematician Andrey Markov, is a stochastic process that satisfies the Markov property: conditional on the present state of the system, its future and past states are independent. A Markov chain is a type of Markov process that has either a discrete state space or a discrete index set, but the precise definition of a Markov chain varies. Andrey Markov studied Markov processes in the early 20th century, publishing his first paper on the topic in 1906, although random walks on the integers and the gambler's ruin problem, both examples of Markov processes in discrete time, were studied hundreds of years earlier. The algorithm known as PageRank, which was proposed for the internet search engine Google, is based on a Markov process. The adjective Markovian is used to describe something that is related to a Markov process. A Markov chain is a stochastic process with the Markov property. The term "Markov chain" refers to the sequence of random variables such a process moves through. It can thus be used for describing systems that follow a chain of linked events. To define such a system, its state space and time parameter index need to be specified. In addition, there are extensions of Markov processes that are still referred to as such. Moreover, the time index need not necessarily be real-valued; like the state space, it can take other forms. Notice that the general-state-space continuous-time Markov chain is general to such a degree that it has no designated term. While the time parameter is usually discrete, the state space of a Markov chain does not have any generally agreed-on restrictions. However, many applications of Markov chains employ finite or countably infinite state spaces. Besides time-index and state-space parameters, there are many other variations, extensions and generalizations. 
For simplicity, most of this article concentrates on the discrete-time, discrete state-space case. The changes of state of the system are called transitions, and the probabilities associated with state changes are called transition probabilities. The process is characterized by a state space and a transition matrix describing the probabilities of particular transitions. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state, and the process does not terminate
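A discrete-time, discrete state-space chain of the kind described above can be sketched as a dictionary-based transition matrix; the two states and their probabilities below are purely illustrative.

```python
# A two-state Markov chain: transition_matrix[s] maps each possible
# next state to its probability. Each row must sum to 1, so there is
# always a next state and the process does not terminate.
import random

transition_matrix = {
    "A": {"A": 0.9, "B": 0.1},
    "B": {"A": 0.5, "B": 0.5},
}

assert all(abs(sum(row.values()) - 1) < 1e-12
           for row in transition_matrix.values())

def step(state, rng=random):
    states, probs = zip(*transition_matrix[state].items())
    return rng.choices(states, weights=probs)[0]

state = "A"
for _ in range(10):
    state = step(state)
print(state in transition_matrix)  # True
```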
18.
Applied mathematics
–
Applied mathematics is a branch of mathematics that deals with mathematical methods that find use in science, engineering, business, computer science, and industry. Thus, applied mathematics is a combination of mathematical science and specialized knowledge. The term "applied mathematics" also describes the professional specialty in which mathematicians work on practical problems by formulating and studying mathematical models. The activity of applied mathematics is thus intimately connected with research in pure mathematics. Historically, applied mathematics consisted principally of applied analysis, most notably differential equations and approximation theory. Quantitative finance is now taught in mathematics departments across universities, and mathematical finance is considered a full branch of applied mathematics. Engineering and computer science departments have traditionally made use of applied mathematics. Today, the term "applied mathematics" is used in a broader sense. It includes the classical areas noted above as well as other areas that have become increasingly important in applications. Even fields such as number theory that are part of pure mathematics are now important in applications. There is no consensus as to what the various branches of applied mathematics are; such categorizations are made difficult by the way mathematics and science change over time, and also by the way universities organize departments, courses, and degrees. Many mathematicians distinguish between applied mathematics, which is concerned with mathematical methods, and the applications of mathematics within science. Some mathematicians, such as Poincaré and Arnold, deny the existence of applied mathematics; similarly, some non-mathematicians blend applied mathematics and applications of mathematics. The use and development of mathematics to solve industrial problems is also called industrial mathematics. Historically, mathematics was most important in the sciences and engineering. 
Academic institutions are not consistent in the way they group and label courses, programs, and degrees in applied mathematics. At some schools there is a single mathematics department, whereas others have separate departments for Applied Mathematics and Mathematics. It is very common for statistics departments to be separate at schools with graduate programs. Many applied mathematics programs consist primarily of cross-listed courses and jointly appointed faculty in departments representing applications. Some Ph.D. programs in applied mathematics require little or no coursework outside of mathematics; in some respects this difference reflects the distinction between application of mathematics and applied mathematics. Research universities dividing their mathematics department into pure and applied sections include MIT. Brigham Young University also has an Applied and Computational Emphasis, a program that allows students to graduate with a mathematics degree with an emphasis in applied math
19.
Linear time-invariant theory
–
Linear time-invariant (LTI) system theory investigates the response of a linear and time-invariant system to an arbitrary input signal. These systems are sometimes called linear translation-invariant to give the theory the most general reach; in the case of generic discrete-time systems, linear shift-invariant is the corresponding term. A good example of LTI systems are electrical circuits that can be made up of resistors, capacitors, and inductors. The defining properties of any LTI system are linearity and time invariance. Linearity means that if input x1 produces response y1 and input x2 produces response y2, then the scaled and summed input c1 x1 + c2 x2 produces the scaled and summed response c1 y1 + c2 y2; it follows that this can be extended to an arbitrary number of terms, where the ci are scalars. Time invariance means that whether we apply an input to the system now or T seconds from now, the output will be identical except for a time delay of T seconds. That is, if the output due to input x(t) is y(t), then the output due to input x(t − T) is y(t − T); hence, the system is time invariant because the output does not depend on the particular time the input is applied. The fundamental result in LTI system theory is that any LTI system can be characterized entirely by a single function called the system's impulse response. The output of the system is simply the convolution of the input to the system with the impulse response. This method of analysis is called the time-domain point of view. The same result is true of discrete-time linear shift-invariant systems, in which signals are discrete-time samples. Equivalently, any LTI system can be characterized in the frequency domain by the system's transfer function, which is the Laplace transform of the system's impulse response. As a result of the properties of these transforms, the output of the system in the frequency domain is the product of the transfer function and the transform of the input. In other words, convolution in the time domain is equivalent to multiplication in the frequency domain. For all LTI systems, the eigenfunctions, and the basis functions of the transforms, are complex exponentials: if the input to a system is the complex waveform A e^(st), the output is B e^(st) for some complex B, and the ratio B / A is the transfer function at frequency s. 
LTI systems cannot produce frequency components that are not present in the input. LTI system theory is good at describing many important systems; most LTI systems are considered easy to analyze, at least compared to the time-varying and/or nonlinear case. Any system that can be modeled as a linear homogeneous differential equation with constant coefficients is an LTI system. Examples of such systems are electrical circuits made up of resistors, inductors, and capacitors (RLC circuits); ideal spring–mass–damper systems are also LTI systems, and are mathematically equivalent to RLC circuits
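The central claim that an LTI system's output is the convolution of the input with the impulse response can be sketched for the discrete-time case; the two-tap averaging filter below is an illustrative choice of impulse response, not anything from the text.

```python
# Discrete convolution: y[n] = sum over k of x[k] * h[n - k]
def convolve(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] += xn * hk
    return y

h = [0.5, 0.5]         # impulse response (two-point moving average)
x = [1.0, 2.0, 3.0]    # input signal
print(convolve(x, h))  # [0.5, 1.5, 2.5, 1.5]
```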
20.
Band (mathematics)
–
In mathematics, a band is a semigroup in which every element is idempotent. Bands were first studied and named by A. H. Clifford. A class of bands forms a variety if it is closed under formation of subsemigroups, homomorphic images and direct products. Each variety of bands can be defined by a single defining identity. Semilattices are exactly the commutative bands, that is, the bands satisfying the equation xy = yx for all x and y. A left zero band is a band satisfying the equation xy = x, so that its Cayley table has constant rows; symmetrically, a right zero band is one satisfying xy = y, so that the Cayley table has constant columns. There is a complete classification of rectangular bands: left zero and right zero bands are rectangular bands, and in fact every rectangular band is isomorphic to a direct product of a left zero band and a right zero band. All rectangular bands of prime order are zero bands, either left or right; a rectangular band is said to be purely rectangular if it is not a left or right zero band. Note that if the set I is empty in the above result, the rectangular band I × J is independent of J; this is why the above result only gives an equivalence between nonempty rectangular bands and pairs of nonempty sets. A normal band is a band S satisfying zxyz = zyxz for all x, y, and z ∈ S. This is the same equation used to define medial magmas, and so a normal band may also be called a medial band. A left-regular band is a band S satisfying xyx = xy for all x, y ∈ S; left-regular bands thus show up naturally in the study of posets. A right-regular band is a band S satisfying xyx = yx for all x, y ∈ S. Any right-regular band becomes a left-regular band using the opposite product. Indeed, every variety of bands has an opposite version; this gives rise to the reflection symmetry in the figure below. The complete structure of the lattice of varieties of bands is known; in particular, it is countable and complete. The sublattice consisting of the 13 varieties of regular bands is shown in the figure. The varieties of left-zero bands, semilattices, and right-zero bands are the three atoms of this lattice. 
Note that each variety of bands shown in the figure is defined by just one identity. This is not a coincidence: in fact, every variety of bands can be defined by a single identity.
21.
Lie group
–
In mathematics, a Lie group /ˈliː/ is a group that is also a differentiable manifold, with the property that the group operations are compatible with the smooth structure. Lie groups are named after Sophus Lie, who laid the foundations of the theory of continuous transformation groups. The term groupes de Lie first appeared in French in 1893 in the thesis of Lie's student Arthur Tresse. An extension of Galois theory to the case of continuous symmetry groups was one of Lie's principal motivations. Lie groups are smooth manifolds and as such can be studied using differential calculus. Lie groups play an important role in modern geometry, on several different levels. Felix Klein argued in his Erlangen program that one can consider various geometries by specifying an appropriate transformation group that leaves certain geometric properties invariant, and this idea later led to the notion of a G-structure, where G is a Lie group of local symmetries of a manifold. On a global level, whenever a Lie group acts on a geometric object, such as a Riemannian or a symplectic manifold, this action provides a measure of rigidity. The presence of continuous symmetries expressed via a Lie group action on a manifold places strong constraints on its geometry. Linear actions of Lie groups are especially important, and are studied in representation theory. This insight opened new possibilities in pure algebra, by providing a uniform construction for most finite simple groups. A real Lie group is a group that is also a finite-dimensional real smooth manifold, in which the group operations of multiplication and inversion are smooth maps. Smoothness of the group multiplication μ : G × G → G, μ(x, y) = xy, means that μ is a smooth mapping of the product manifold G × G into G. These two requirements can be combined into the single requirement that the mapping (x, y) ↦ x⁻¹y be a smooth mapping of the product manifold into G. The 2×2 real invertible matrices form a group under multiplication, denoted by GL(2, R) or by GL2(R); this is a four-dimensional noncompact real Lie group. 
This group is disconnected: it has two connected components corresponding to the positive and negative values of the determinant. The rotation matrices form a subgroup of GL(2, R), denoted by SO(2, R). It is a Lie group in its own right. Specifically, using the rotation angle φ as a parameter, this group can be parametrized as follows: SO(2, R) = { [[cos φ, −sin φ], [sin φ, cos φ]] : φ ∈ R mod 2π }. Addition of the angles corresponds to multiplication of the elements of SO(2, R), and thus both multiplication and inversion are differentiable maps. The orthogonal group also forms an example of a Lie group. All of these examples of Lie groups fall within the class of classical groups. Hilbert's fifth problem asked whether replacing differentiable manifolds with topological or analytic ones can yield new examples. If the underlying manifold is allowed to be infinite-dimensional, then one arrives at the notion of an infinite-dimensional Lie group
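The claim that angle addition corresponds to matrix multiplication in SO(2) can be checked numerically; the angles below are arbitrary illustrative values.

```python
# Verify that rot(p) * rot(q) == rot(p + q) for 2x2 rotation matrices.
import math

def rot(phi):
    return [[math.cos(phi), -math.sin(phi)],
            [math.sin(phi),  math.cos(phi)]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

p, q = 0.3, 1.1
product = matmul(rot(p), rot(q))
expected = rot(p + q)
assert all(abs(product[i][j] - expected[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```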
22.
Group theory
–
In mathematics and abstract algebra, group theory studies the algebraic structures known as groups. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra. Linear algebraic groups and Lie groups are two branches of group theory that have experienced advances and have become subject areas in their own right. Various physical systems, such as crystals and the hydrogen atom, may be modelled by symmetry groups; thus group theory and the closely related representation theory have many important applications in physics, chemistry, and materials science. Group theory is also central to public key cryptography. The first class of groups to undergo a systematic study was permutation groups. Given any set X and a collection G of bijections of X into itself that is closed under compositions and inverses, G is a group acting on X. If X consists of n elements and G consists of all permutations, G is the symmetric group Sn. In general, an early construction due to Cayley exhibited any group as a permutation group, acting on itself by means of the left regular representation. In many cases, the structure of a permutation group can be studied using the properties of its action on the corresponding set. For example, in this way one proves that for n ≥ 5, the alternating group An is simple, and this fact plays a key role in the impossibility of solving a general algebraic equation of degree n ≥ 5 in radicals. The next important class of groups is given by matrix groups. Here G is a set consisting of invertible matrices of given order n over a field K that is closed under products and inverses. Such a group acts on the vector space Kn by linear transformations. In the case of permutation groups, X is a set; for matrix groups, X is a vector space. The concept of a group is closely related with the concept of a symmetry group. The theory of Lie groups forms a bridge connecting group theory with differential geometry. A long line of research, originating with Lie and Klein, considers group actions on manifolds; the groups themselves may be discrete or continuous. 
Most groups considered in the first stage of the development of group theory were "concrete," having been realized through numbers, permutations, or matrices. It was not until the late nineteenth century that the idea of an abstract group as a set with operations satisfying a certain system of axioms began to take hold. A typical way of specifying an abstract group is through a presentation by generators and relations. A significant source of abstract groups is given by the construction of a factor group, or quotient group, G/H, of a group G by a normal subgroup H. Class groups of algebraic number fields were among the earliest examples of factor groups, of much interest in number theory
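The permutation groups described above can be sketched concretely: below, permutations of {0, 1, 2} are represented as tuples recording where each point is sent, and the group axioms of closure and inverses are checked by brute force for the symmetric group S3.

```python
# S3 as a permutation group acting on {0, 1, 2}.
from itertools import permutations

def compose(f, g):
    """Apply g first, then f."""
    return tuple(f[g[i]] for i in range(len(g)))

S3 = set(permutations(range(3)))
identity = (0, 1, 2)

# Closure: composing any two permutations stays in S3.
assert all(compose(f, g) in S3 for f in S3 for g in S3)
# Inverses: every permutation has an inverse in S3.
assert all(any(compose(f, g) == identity for g in S3) for f in S3)
print(len(S3))  # 6
```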
23.
Ring (mathematics)
–
In mathematics, a ring is one of the fundamental algebraic structures used in abstract algebra. It consists of a set equipped with two binary operations that generalize the arithmetic operations of addition and multiplication. Through this generalization, theorems from arithmetic are extended to non-numerical objects such as polynomials, series and matrices. The conceptualization of rings began in the 1870s and was completed in the 1920s. Key contributors include Dedekind, Hilbert, Fraenkel, and Noether. Rings were first formalized as a generalization of Dedekind domains that occur in number theory, and of polynomial rings and rings of invariants that occur in algebraic geometry and invariant theory. Afterward, they proved to be useful in other branches of mathematics such as geometry. A ring is an abelian group with a second binary operation that is associative and is distributive over the abelian group operation. By extension from the integers, the abelian group operation is called addition. Whether a ring is commutative or not has profound implications on its behavior as an abstract object; as a result, commutative ring theory, commonly known as commutative algebra, is a key topic in ring theory. Its development has been greatly influenced by problems and ideas occurring naturally in algebraic number theory. The most familiar example of a ring is the set of all integers, Z = {..., −5, −4, −3, −2, −1, 0, 1, 2, 3, 4, 5, ...}. The familiar properties for addition and multiplication of integers serve as a model for the axioms for rings. A ring is a set R equipped with two binary operations + and · satisfying the following three sets of axioms, called the ring axioms:
1. R is an abelian group under addition, meaning that:
(a + b) + c = a + (b + c) for all a, b, c in R.
a + b = b + a for all a, b in R.
There is an element 0 in R such that a + 0 = a for all a in R.
For each a in R there exists −a in R such that a + (−a) = 0.
2. R is a monoid under multiplication, meaning that:
(a · b) · c = a · (b · c) for all a, b, c in R. 
There is an element 1 in R such that a · 1 = a and 1 · a = a for all a in R.
3. Multiplication is distributive with respect to addition, meaning that:
a · (b + c) = (a · b) + (a · c) for all a, b, c in R.
(b + c) · a = (b · a) + (c · a) for all a, b, c in R.
As explained in § History below, many authors follow an alternative convention in which a ring is not defined to have a multiplicative identity. This article adopts the convention that, unless otherwise stated, a ring is assumed to have such an identity
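The ring axioms listed above can be spot-checked by brute force for a small concrete ring; the sketch below uses the integers modulo 6, a standard example of a commutative ring.

```python
# Verify the ring axioms for Z/6Z by exhaustive checking.
n = 6
R = range(n)
add = lambda a, b: (a + b) % n
mul = lambda a, b: (a * b) % n

assert all(add(add(a, b), c) == add(a, add(b, c))
           for a in R for b in R for c in R)      # + is associative
assert all(add(a, b) == add(b, a) for a in R for b in R)
assert all(add(a, 0) == a and mul(a, 1) == a for a in R)
assert all(add(a, (n - a) % n) == 0 for a in R)   # additive inverses
assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
           for a in R for b in R for c in R)      # distributivity
print("ring axioms hold for Z/6Z")
```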
24.
Commutative ring
–
In ring theory, a branch of abstract algebra, a commutative ring is a ring in which the multiplication operation is commutative. The study of commutative rings is called commutative algebra. Complementarily, noncommutative algebra is the study of noncommutative rings, where multiplication is not, or is not required to be, commutative. A ring consists of a set equipped with two binary operations, i.e. operations combining any two elements of the ring to a third. They are called addition and multiplication and are commonly denoted by + and ⋅, e.g. a + b and a ⋅ b. The identity elements for addition and multiplication are denoted 0 and 1, respectively. If the multiplication is commutative, i.e. a ⋅ b = b ⋅ a, then the ring is called commutative. In the remainder of this article, all rings will be commutative, unless explicitly stated otherwise. An important example, and in some sense crucial, is the ring of integers Z with the two operations of addition and multiplication; as the multiplication of integers is a commutative operation, this is a commutative ring. It is usually denoted Z as an abbreviation of the German word Zahlen. A field is a commutative ring where every non-zero element a is invertible, i.e. has a multiplicative inverse b such that a ⋅ b = 1. Therefore, by definition, any field is a commutative ring. The rational, real and complex numbers form fields. An example is the set of matrices of divided differences with respect to a set of nodes. If R is a commutative ring, then the set of all polynomials in the variable X whose coefficients are in R forms the polynomial ring. The same holds true for several variables. If V is some topological space, for example a subset of some Rn, the real- or complex-valued continuous functions on V form a commutative ring. The same is true for differentiable or holomorphic functions, when the two concepts are defined, such as for V a complex manifold. In contrast to fields, where every nonzero element is multiplicatively invertible, the theory of rings is more complicated. 
There are several notions to cope with that situation. First, an element a of a ring R is called a unit if it possesses a multiplicative inverse. Another particular type of element is a zero divisor, i.e. a non-zero element a such that there exists a non-zero element b of the ring such that ab = 0. If R possesses no zero divisors, it is called an integral domain, since it resembles the integers in some ways. Many of the following notions also exist for not necessarily commutative rings, but the commutative case is simpler; for example, all ideals in a commutative ring are automatically two-sided, which simplifies the situation considerably. Given any subset F = (f_j), j ∈ J, of R, the ideal generated by F is the smallest ideal that contains F. An ideal generated by one element is called a principal ideal. A ring all of whose ideals are principal is called a principal ideal ring. Any ring has two ideals, namely the zero ideal and R, the whole ring
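The notions of unit and zero divisor just defined can be sketched concretely in the commutative ring Z/6Z, where both kinds of elements appear.

```python
# Units and zero divisors in Z/6Z.
from math import gcd

n = 6
# a is a unit iff gcd(a, n) == 1 (then some b gives a*b ≡ 1 mod n)
units = [a for a in range(1, n) if gcd(a, n) == 1]
# a is a zero divisor iff some non-zero b gives a*b ≡ 0 mod n
zero_divisors = [a for a in range(1, n)
                 if any(a * b % n == 0 for b in range(1, n))]

print(units)          # [1, 5]
print(zero_divisors)  # [2, 3, 4]
```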
25.
Field (mathematics)
–
In mathematics, a field is a set on which addition, subtraction, multiplication, and division are defined, which behave as they do when applied to rational and real numbers. A field is thus a fundamental algebraic structure, which is widely used in algebra and number theory. The best known fields are the field of rational numbers and the field of real numbers. In addition, the field of complex numbers is widely used, not only in mathematics. Finite fields are used in most cryptographic protocols used for computer security. Any field may be used as the scalars for a vector space, which is the standard general context for linear algebra. Formally, a field is a set together with two operations, the addition and the multiplication, which have the following properties, called the axioms of fields. An operation is a mapping that associates an element of the set to every pair of its elements. The result of the addition of a and b is called the sum of a and b and is denoted a + b. Similarly, the result of the multiplication of a and b is called the product of a and b.
Associativity of addition and multiplication: For all a, b and c in F, one has a + (b + c) = (a + b) + c and a · (b · c) = (a · b) · c.
Commutativity of addition and multiplication: For all a and b in F, one has a + b = b + a and a · b = b · a.
Existence of additive and multiplicative identity elements: There exists an element 0 in F, called the additive identity, such that a + 0 = a for all a in F. There is an element 1, different from 0 and called the multiplicative identity, such that a · 1 = a for all a in F.
Existence of additive inverses and multiplicative inverses: For every a in F, there exists an element in F, denoted −a, such that a + (−a) = 0. For every a ≠ 0 in F, there exists an element in F, denoted a−1 or 1/a, such that a · a−1 = 1.
Distributivity of multiplication over addition: For all a, b and c in F, one has a · (b + c) = (a · b) + (a · c).
The elements 0 and 1 being required to be distinct, a field has at least two elements. For every a in F, one has −a = (−1) ⋅ a. Thus, the additive inverse of every element is known as soon as one knows the additive inverse of 1. 
A subtraction and a division are defined in every field by a − b = a + (−b) and a / b = a ⋅ b−1. A subfield E of a field F is a subset of F that contains 1, and is closed under addition, multiplication, additive inverse and multiplicative inverse of a nonzero element. It is straightforward to verify that a subfield is indeed a field. Two groups are associated to every field. The field itself is a group under addition; when considering this group structure rather than the field structure, one talks of the additive group of the field
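The existence of multiplicative inverses, and the definition of division as multiplication by an inverse, can be sketched in the finite field of integers modulo a prime p; here the inverse is computed as a^(p−2) mod p via Fermat's little theorem, using Python's three-argument pow.

```python
# Inverses and division in the finite field Z/7Z.
p = 7

def inv(a):
    assert a % p != 0          # zero has no multiplicative inverse
    return pow(a, p - 2, p)    # Fermat: a^(p-1) ≡ 1, so a^(p-2) ≡ a^(-1)

# Every nonzero element has an inverse.
assert all(a * inv(a) % p == 1 for a in range(1, p))

# Division a / b is defined as a * b^(-1).
print(3 * inv(5) % p)  # 2, since 5 * 2 = 10 ≡ 3 (mod 7)
```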