1.
Boolean algebra
–
In mathematics and mathematical logic, Boolean algebra is the branch of algebra in which the values of the variables are the truth values true and false, usually denoted 1 and 0 respectively. It is thus a formalism for describing logical relations in the way that ordinary algebra describes numeric relations. Boolean algebra was introduced by George Boole in his first book, The Mathematical Analysis of Logic; according to Huntington, the term Boolean algebra was first suggested by Sheffer in 1913. Boolean algebra has been fundamental in the development of digital electronics, and it is also used in set theory and statistics. Boole's algebra predated the modern developments in abstract algebra and mathematical logic. In an abstract setting, Boolean algebra was perfected in the late 19th century by Jevons, Schröder, and Huntington; indeed, M. H. Stone proved in 1936 that every Boolean algebra is isomorphic to a field of sets. Shannon already had at his disposal this abstract mathematical apparatus, and thus he cast his switching algebra as the two-element Boolean algebra. In circuit engineering settings today, there is little need to consider other Boolean algebras, so switching algebra and Boolean algebra are often used interchangeably. Efficient implementation of Boolean functions is a fundamental problem in the design of combinational logic circuits. Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean algebra; thus, Boolean logic is sometimes used to denote propositional calculus performed in this way. Boolean algebra is not sufficient to capture logic formulas using quantifiers. The closely related model of computation known as a Boolean circuit relates time complexity to circuit complexity. Whereas in elementary algebra expressions denote mainly numbers, in Boolean algebra they denote the truth values false and true, and these values are represented with the bits 0 and 1.
Addition and multiplication then play the Boolean roles of XOR (exclusive or) and AND (conjunction), respectively. Boolean algebra also deals with functions which take their values in the set {0, 1}. A sequence of bits is a commonly used such function. Another common example is the subsets of a set E: to a subset F of E is associated the indicator function that takes the value 1 on F and 0 outside F. The most general example is the elements of a Boolean algebra. As with elementary algebra, the purely equational part of the theory may be developed without considering explicit values for the variables. The basic operations of Boolean calculus are as follows. AND, denoted x∧y, satisfies x∧y = 1 if x = y = 1 and x∧y = 0 otherwise. OR, denoted x∨y, satisfies x∨y = 0 if x = y = 0 and x∨y = 1 otherwise. NOT, denoted ¬x, satisfies ¬x = 0 if x = 1 and ¬x = 1 if x = 0. Alternatively, the values of x∧y, x∨y, and ¬x can be expressed by tabulating their values with truth tables. The first of the derived operations, x → y, or Cxy, is called material implication: if x is true, then the value of x → y is taken to be that of y; if x is false, then x → y is true.
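These operations can be sketched directly in Python; the function names and the tabulation below are illustrative, not part of any standard library.

```python
# A minimal sketch of the basic Boolean operations on the bits 0 and 1.
def AND(x, y): return x & y              # x∧y = 1 only when x = y = 1
def OR(x, y):  return x | y              # x∨y = 0 only when x = y = 0
def NOT(x):    return 1 - x              # ¬x exchanges 0 and 1
def XOR(x, y): return x ^ y              # addition modulo 2
def IMPLIES(x, y): return OR(NOT(x), y)  # material implication x → y

# Tabulating all four input combinations reproduces the truth tables;
# note that x → y is 1 whenever x = 0, and equals y whenever x = 1.
for x in (0, 1):
    for y in (0, 1):
        print(x, y, AND(x, y), OR(x, y), XOR(x, y), IMPLIES(x, y))
```

Note that XOR coincides with addition modulo 2, which is why Boolean algebra can also be presented as arithmetic in the two-element field.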
2.
Abstract algebra
–
In algebra, which is a broad division of mathematics, abstract algebra is the study of algebraic structures. Algebraic structures include groups, rings, fields, modules, vector spaces, and lattices. The term abstract algebra was coined in the early 20th century to distinguish this area of study from the other parts of algebra. Algebraic structures, with their associated homomorphisms, form mathematical categories; category theory is a formalism that allows a unified way of expressing properties and constructions that are similar for various structures. Universal algebra is a related subject that studies types of algebraic structures as single objects. For example, the structure of groups is a single object in universal algebra. As in other parts of mathematics, concrete problems and examples have played important roles in the development of abstract algebra; through the end of the nineteenth century, many – perhaps most – of these problems were in some way related to the theory of algebraic equations. Numerous textbooks in abstract algebra start with axiomatic definitions of various algebraic structures. This creates an impression that in algebra axioms had come first and then served as a motivation; the true order of historical development was almost exactly the opposite. For example, the hypercomplex numbers of the nineteenth century had kinematic and physical motivations. An archetypical example of this progressive synthesis can be seen in the history of group theory. There were several threads in the early development of group theory, in modern language loosely corresponding to number theory, theory of equations, and geometry. Leonhard Euler considered algebraic operations on numbers modulo an integer, that is, modular arithmetic. Lagrange's goal was to understand why equations of third and fourth degree admit formulae for solutions, and he identified as key objects permutations of the roots. An important novel step taken by Lagrange in this paper was the abstract view of the roots, i.e. as symbols.
However, he did not consider composition of permutations. Serendipitously, the first edition of Edward Waring's Meditationes Algebraicae appeared in the same year, with an expanded version published in 1782. Waring proved the fundamental theorem on symmetric functions, and specially considered the relation between the roots of a quartic equation and its resolvent cubic. Kronecker claimed in 1888 that the study of modern algebra began with this first paper of Vandermonde; Cauchy states quite clearly that Vandermonde had priority over Lagrange for this remarkable idea, which eventually led to the study of group theory. Paolo Ruffini was the first person to develop the theory of permutation groups, and his goal was to establish the impossibility of an algebraic solution to a general algebraic equation of degree greater than four.
3.
Complemented lattice
–
In the mathematical discipline of order theory, a complemented lattice is a bounded lattice (with least element 0 and greatest element 1) in which every element a has a complement, i.e. an element b satisfying a ∨ b = 1 and a ∧ b = 0. A relatively complemented lattice is a lattice such that every interval [c, d], viewed as a bounded lattice in its own right, is a complemented lattice. An orthocomplementation on a complemented lattice is an involution which is order-reversing and maps each element to a complement. An orthocomplemented lattice satisfying a weak form of the modular law is called an orthomodular lattice. In distributive lattices, complements are unique; every complemented distributive lattice has a unique orthocomplementation and is in fact a Boolean algebra. A complemented lattice is a bounded lattice in which every element a has a complement, i.e. an element b such that a ∨ b = 1 and a ∧ b = 0. In general an element may have more than one complement; however, in a distributive lattice every element will have at most one complement. In other words, a relatively complemented lattice is characterized by the property that for every element a in an interval [c, d] there is an element b such that a ∨ b = d and a ∧ b = c. Such an element b is called a complement of a relative to the interval. A distributive lattice is complemented if and only if it is bounded and relatively complemented. The lattice of subspaces of a vector space provides an example of a complemented lattice that is not, in general, distributive. An orthocomplementation maps each element a to an orthocomplement a⊥ satisfying the complement laws (a ∨ a⊥ = 1 and a ∧ a⊥ = 0), the involution law ((a⊥)⊥ = a), and order reversal: if a ≤ b then b⊥ ≤ a⊥. An orthocomplemented lattice or ortholattice is a bounded lattice which is equipped with an orthocomplementation. The lattice of subspaces of an inner product space, with the orthogonal complement operation, provides an example of an orthocomplemented lattice that is not, in general, distributive. Boolean algebras are a special case of orthocomplemented lattices. The ortholattices are most often used in quantum logic, where the closed subspaces of a separable Hilbert space represent quantum propositions. Orthocomplemented lattices, like Boolean algebras, satisfy de Morgan's laws. A lattice is called modular if for all elements a, b and c the implication "if a ≤ c, then a ∨ (b ∧ c) = (a ∨ b) ∧ c" holds.
This is weaker than distributivity; e.g. the above-shown lattice M3 is modular but not distributive. A natural further weakening of this condition for orthocomplemented lattices, necessary for applications in quantum logic, is to require it only in the special case b = a⊥. An orthomodular lattice is therefore defined as an orthocomplemented lattice such that for any two elements a and c the implication "if a ≤ c, then a ∨ (a⊥ ∧ c) = c" holds. Lattices of this form are of importance for the study of quantum logic.
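The lattice M3 is small enough to verify these claims by brute force. The sketch below is illustrative: it hard-codes the join and meet of M3 (bottom '0', top '1', and three pairwise incomparable atoms 'a', 'b', 'c') and checks that complements need not be unique, that the modular law holds, and that distributivity fails.

```python
# The lattice M3 ("diamond"): any two distinct atoms join to the top
# and meet at the bottom.  Element names are illustrative labels.
elements = ['0', 'a', 'b', 'c', '1']

def join(x, y):
    if x == y: return x
    if '0' in (x, y): return y if x == '0' else x
    return '1'   # two distinct elements above the bottom join to the top

def meet(x, y):
    if x == y: return x
    if '1' in (x, y): return y if x == '1' else x
    return '0'   # two distinct elements below the top meet at the bottom

def leq(x, y):
    return meet(x, y) == x   # the usual order: x <= y iff x ∧ y = x

# 'a' has TWO complements, so complements are not unique in M3:
complements_of_a = [y for y in elements
                    if join('a', y) == '1' and meet('a', y) == '0']
print(complements_of_a)  # ['b', 'c']

# The modular law: a <= c implies a ∨ (b ∧ c) = (a ∨ b) ∧ c.
modular = all(join(x, meet(y, z)) == meet(join(x, y), z)
              for x in elements for y in elements for z in elements
              if leq(x, z))

# Distributivity fails, e.g. a ∧ (b ∨ c) = a but (a ∧ b) ∨ (a ∧ c) = 0.
distributive = all(meet(x, join(y, z)) == join(meet(x, y), meet(x, z))
                   for x in elements for y in elements for z in elements)
print(modular, distributive)  # True False
```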
4.
Distributive lattice
–
In mathematics, a distributive lattice is a lattice in which the operations of join and meet distribute over each other. The prototypical examples of such structures are collections of sets for which the lattice operations can be given by set union and intersection. Indeed, these lattices of sets describe the scenery completely: every distributive lattice is – up to isomorphism – given as such a lattice of sets. As in the case of arbitrary lattices, one can choose to consider a distributive lattice L either as a structure of order theory or of universal algebra. Both views and their correspondence are discussed in the article on lattices. In the present situation, the algebraic description appears to be more convenient. A lattice is distributive if the following additional identity holds for all x, y, and z in L: x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z). Viewing lattices as partially ordered sets, this says that the meet operation preserves non-empty finite joins. It is a basic fact of lattice theory that the above condition is equivalent to its dual: x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z) for all x, y, and z in L. In every lattice, defining p ≤ q as usual to mean p ∧ q = p, the inequality x ∧ (y ∨ z) ≥ (x ∧ y) ∨ (x ∧ z) holds; a lattice is distributive if the converse inequality holds, too. More information on the relationship of this condition to other distributivity conditions of order theory can be found in the article on distributivity. A morphism of distributive lattices is just a lattice homomorphism as given in the article on lattices. Because such a morphism of lattices preserves the lattice structure, it will consequently also preserve the distributivity. Distributive lattices are ubiquitous but also rather specific structures. As already mentioned, the main examples for distributive lattices are lattices of sets, where join and meet are given by the usual set-theoretic operations. Further examples include: The Lindenbaum algebra of most logics that support conjunction and disjunction is a distributive lattice, i.e. "and" distributes over "or". Every Boolean algebra is a distributive lattice, and every Heyting algebra is a distributive lattice.
Especially, this includes all locales and hence all open set lattices of topological spaces. Also note that Heyting algebras can be viewed as Lindenbaum algebras of intuitionistic logic, which makes them a special case of the above example. Every totally ordered set is a distributive lattice with max as join and min as meet. The natural numbers form a distributive lattice by taking the greatest common divisor as meet and the least common multiple as join. This lattice also has a least element, namely 1, which serves as the identity element for joins.
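The divisibility example is easy to check by brute force over a small range of naturals; gcd plays the role of meet and lcm of join, and both distributive laws hold. The range bound below is arbitrary, chosen just to keep the check fast.

```python
from math import gcd

def lcm(x, y):
    # Least common multiple via the gcd identity x*y = gcd(x,y)*lcm(x,y).
    return x * y // gcd(x, y)

# Check both distributive laws: meet (gcd) distributes over join (lcm)
# and vice versa, for all triples in a small range.
ok = all(
    gcd(x, lcm(y, z)) == lcm(gcd(x, y), gcd(x, z)) and
    lcm(x, gcd(y, z)) == gcd(lcm(x, y), lcm(x, z))
    for x in range(1, 30) for y in range(1, 30) for z in range(1, 30)
)
print(ok)  # True: the divisibility lattice is distributive

# 1 divides everything: gcd(x, 1) = 1 and lcm(x, 1) = x, so 1 is the
# least element and the identity for joins, as stated above.
```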
5.
Set (mathematics)
–
In mathematics, a set is a well-defined collection of distinct objects, considered as an object in its own right. For example, the numbers 2, 4, and 6 are distinct objects when considered separately, but when they are considered collectively they form a single set of size three, written {2, 4, 6}. Sets are one of the most fundamental concepts in mathematics. Developed at the end of the 19th century, set theory is now a ubiquitous part of mathematics. In mathematics education, elementary topics such as Venn diagrams are taught at a young age. The German word Menge, rendered as "set" in English, was coined by Bernard Bolzano in his work The Paradoxes of the Infinite. A set is a collection of distinct objects. The objects that make up a set can be anything: numbers, people, letters of the alphabet, other sets, and so on. Sets are conventionally denoted with capital letters. Sets A and B are equal if and only if they have precisely the same elements. Cantor's definition turned out to be inadequate; instead, the notion of a set is taken as an undefined primitive notion in axiomatic set theory. There are two ways of describing, or specifying the members of, a set. One way is by intensional definition, using a rule or semantic description: A is the set whose members are the first four positive integers; B is the set of colors of the French flag. The second way is by extension, that is, listing each member of the set. An extensional definition is denoted by enclosing the list of members in curly brackets: C = {4, 2, 1, 3} and D = {blue, white, red}. One often has the choice of specifying a set either intensionally or extensionally. In the examples above, for instance, A = C and B = D. There are two important points to note about sets. First, in an extensional definition, a set member can be listed two or more times. However, per extensionality, two definitions of sets which differ only in that one of the definitions lists set members multiple times define, in fact, the same set. Hence, a set whose definition repeats a member is identical to the set listing that member only once.
The second important point is that the order in which the elements of a set are listed is irrelevant. We can illustrate these two important points with an example: {6, 11} = {11, 6} = {11, 6, 6, 11}. For sets with many elements, the enumeration of members can be abbreviated. For instance, the set of the first thousand positive integers may be specified extensionally as {1, 2, 3, ..., 1000}, where the ellipsis indicates that the list continues in the obvious way. Ellipses may also be used where sets have infinitely many members; thus the set of positive even numbers can be written as {2, 4, 6, 8, ...}.
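Python's built-in set type mirrors these two points about extensionality directly; the variable names below follow the intensional examples A and B given above and are otherwise illustrative.

```python
# Duplicates in the defining list are ignored, and listing order is
# irrelevant: two set literals are equal iff they have the same members.
A = {1, 2, 3, 4}                       # the first four positive integers
C = {4, 2, 1, 3}                       # same members, different order
assert A == C

B = {"blue", "white", "red"}           # colors of the French flag
D = {"red", "white", "blue", "blue"}   # a repeated member collapses
assert B == D

# An infinite set can only be approximated extensionally; here is a
# finite prefix of the positive even numbers {2, 4, 6, 8, ...}.
evens = set(range(2, 21, 2))
print(len(evens))  # 10
```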
6.
Logic
–
Logic, originally meaning "the word" or "what is spoken", is generally held to consist of the systematic study of the form of arguments. A valid argument is one where there is a specific relation of logical support between the assumptions of the argument and its conclusion. Historically, logic has been studied in philosophy and mathematics; more recently, logic has been studied in computer science, linguistics, and psychology. The concept of logical form is central to logic. The validity of an argument is determined by its logical form, not by its content; traditional Aristotelian syllogistic logic and modern symbolic logic are examples of formal logic. Informal logic is the study of natural language arguments; the study of fallacies is an important branch of informal logic. Since much informal argument is not strictly speaking deductive, on some conceptions of logic, informal logic is not logic at all. Formal logic is the study of inference with purely formal content. An inference possesses a purely formal content if it can be expressed as an application of a wholly abstract rule, that is, a rule that is not about any particular thing or property. The works of Aristotle contain the earliest known formal study of logic. Modern formal logic follows and expands on Aristotle. In many definitions of logic, logical inference and inference with purely formal content are the same. This does not render the notion of informal logic vacuous, because no formal logic captures all of the nuances of natural language. Symbolic logic is the study of symbolic abstractions that capture the formal features of logical inference. Symbolic logic is divided into two main branches: propositional logic and predicate logic. Mathematical logic is an extension of symbolic logic into other areas, in particular to the study of model theory, proof theory, and set theory. Logic is generally considered formal when it analyzes and represents the form of any valid argument type. The form of an argument is displayed by representing its sentences in the formal grammar and symbolism of a logical language to make its content usable in formal inference.
Simply put, formalising means translating English sentences into the language of logic; this is called showing the logical form of the argument. It is necessary because indicative sentences of ordinary language show a considerable variety of form. Certain parts of the sentence must be replaced with schematic letters. Thus, for example, the expression "all Ps are Qs" shows the logical form common to the sentences "all men are mortals", "all cats are carnivores", "all Greeks are philosophers", and so on. The schema can further be condensed into the formula A(P, Q), where the letter A indicates the judgement "all - are -". The importance of form was recognised from ancient times.
7.
Power set
–
In mathematics, the power set of any set S is the set of all subsets of S, including the empty set and S itself. The power set of a set S is variously denoted as P(S), ℘(S), ℙ(S), or 2^S. In axiomatic set theory, the existence of the power set of any set is postulated by the axiom of power set. Any subset of P(S) is called a family of sets over S. If S is the set {x, y, z}, then the subsets of S are {} (the empty set), {x}, {y}, {z}, {x, y}, {x, z}, {y, z}, and {x, y, z}, and hence the power set of S is the set of these eight subsets. If S is a finite set with |S| = n elements, then the number of subsets of S is |P(S)| = 2^n. This fact, which is the motivation for the notation 2^S, may be demonstrated simply as follows. First, order the elements of S in any manner, and write any subset of S in the format {γ1, γ2, ..., γn}, where each γi, 1 ≤ i ≤ n, can take the value of 0 or 1. If γi = 1, the i-th element of S is in the subset; otherwise, it is not. Clearly the number of subsets that can be constructed this way is 2^n, as γi ∈ {0, 1}. Cantor's diagonal argument shows that the power set of a set always has strictly higher cardinality than the set itself. In particular, Cantor's theorem shows that the power set of a countably infinite set is uncountably infinite. The power set of the set of natural numbers can be put in a one-to-one correspondence with the set of real numbers. The power set of a set S, together with the operations of union, intersection and complement, is a Boolean algebra; in fact, one can show that any finite Boolean algebra is isomorphic to the Boolean algebra of the power set of a finite set. For infinite Boolean algebras this is no longer true, but every infinite Boolean algebra can be represented as a subalgebra of a power set Boolean algebra. The power set of a set S forms an abelian group when considered with the operation of symmetric difference, and a commutative monoid when considered with the operation of intersection. It can hence be shown that the power set considered together with both of these operations forms a Boolean ring. In set theory, X^Y is the set of all functions from Y to X; as 2 can be defined as {0, 1}, 2^S is the set of all functions from S to {0, 1}.
Hence 2^S and P(S) could be considered identical set-theoretically. This notion can be applied to the example above, in which S = {x, y, z}, to see the isomorphism with the binary numbers from 0 to 2^n − 1, with n being the number of elements in the set: in the bit string representing a subset of S, a 1 in the position corresponding to the location of an element in the set indicates the presence of that element. The number of subsets with k elements in the power set of a set with n elements is given by the number of combinations, C(n, k), also called the binomial coefficient.
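The subset/bit-string correspondence described above can be sketched as a short Python enumeration; the three-element set and the function name power_set are illustrative.

```python
from itertools import count  # (unused placeholder removed below)
from math import comb

# Enumerate the power set of S via the bit-string correspondence:
# subset number i (0 <= i < 2**n) contains the j-th element exactly
# when bit j of i is 1 -- i.e. a function from S to {0, 1}.
def power_set(S):
    elems = list(S)
    n = len(elems)
    return [
        {elems[j] for j in range(n) if (i >> j) & 1}
        for i in range(2 ** n)
    ]

P = power_set({'x', 'y', 'z'})
print(len(P))  # 8, i.e. 2**3 subsets, from the empty set to S itself

# The number of k-element subsets is the binomial coefficient C(n, k).
for k in range(4):
    assert sum(1 for s in P if len(s) == k) == comb(3, k)
```

The same enumeration underlies the claim that P(S) and 2^S are "identical set-theoretically": each index i encodes one indicator function on S.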