1.
Boolean algebra
–
In mathematics and mathematical logic, Boolean algebra is the branch of algebra in which the values of the variables are the truth values true and false, usually denoted 1 and 0 respectively. It is thus a formalism for describing logical relations in the way that ordinary algebra describes numeric relations. Boolean algebra was introduced by George Boole in his first book, The Mathematical Analysis of Logic; according to Huntington, the term "Boolean algebra" was first suggested by Sheffer in 1913. Boolean algebra has been fundamental in the development of digital electronics, and it is also used in set theory and statistics. Boole's algebra predated the modern developments in abstract algebra and mathematical logic. In an abstract setting, Boolean algebra was perfected in the late 19th century by Jevons, Schröder, and Huntington; in fact, M. H. Stone proved in 1936 that every Boolean algebra is isomorphic to a field of sets. Shannon already had at his disposal this abstract mathematical apparatus, and thus he cast his switching algebra as the two-element Boolean algebra. In circuit engineering settings today there is little need to consider other Boolean algebras, so "switching algebra" and "Boolean algebra" are often used interchangeably. Efficient implementation of Boolean functions is a fundamental problem in the design of combinational logic circuits. Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean algebra; thus, Boolean logic is sometimes used to denote propositional calculus performed in this way. Boolean algebra is not sufficient to capture logic formulas using quantifiers. The closely related model of computation known as a Boolean circuit relates time complexity to circuit complexity. Whereas in elementary algebra expressions denote mainly numbers, in Boolean algebra they denote the truth values false and true, and these values are represented with the bits 0 and 1. 
Addition and multiplication then play the Boolean roles of XOR and AND respectively. Boolean algebra also deals with functions which have their values in the set {0, 1}. A sequence of bits is a commonly used example of such a function. Another common example is the subsets of a set E: to a subset F of E is associated the indicator function that takes the value 1 on F and 0 outside F. The most general example is the elements of a Boolean algebra. As with elementary algebra, the purely equational part of the theory may be developed without considering explicit values for the variables. The basic operations of Boolean calculus are as follows. AND, denoted x∧y, satisfies x∧y = 1 if x = y = 1 and x∧y = 0 otherwise. OR, denoted x∨y, satisfies x∨y = 0 if x = y = 0 and x∨y = 1 otherwise. NOT, denoted ¬x, satisfies ¬x = 0 if x = 1 and ¬x = 1 if x = 0. Alternatively, the values of x∧y, x∨y, and ¬x can be expressed by tabulating their values with truth tables. A further operation, x → y, also written Cxy, is called material implication. If x is true, then the value of x → y is taken to be that of y; if x is false, x → y is true.
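The basic operations above are easy to tabulate mechanically. A minimal sketch in Python (the uppercase function names are mine, not standard notation):

```python
# Basic Boolean operations on the bits 0 and 1.
def AND(x, y): return x & y              # x ∧ y
def OR(x, y):  return x | y              # x ∨ y
def NOT(x):    return 1 - x              # ¬x
def IMPLIES(x, y): return OR(NOT(x), y)  # x → y, i.e. ¬x ∨ y

# Print the truth tables of AND, OR, and material implication.
for x in (0, 1):
    for y in (0, 1):
        print(x, y, AND(x, y), OR(x, y), IMPLIES(x, y))
```

Note how the table confirms the remark in the text: when x is 1, the value of IMPLIES(x, y) is simply that of y.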
2.
Abstract algebra
–
In algebra, which is a broad division of mathematics, abstract algebra is the study of algebraic structures. Algebraic structures include groups, rings, fields, modules, vector spaces, and lattices. The term abstract algebra was coined in the early 20th century to distinguish this area of study from the other parts of algebra. Algebraic structures, with their associated homomorphisms, form mathematical categories; category theory is a formalism that allows a unified way for expressing properties and constructions that are similar for various structures. Universal algebra is a related subject that studies types of algebraic structures as single objects; for example, the structure of groups is a single object in universal algebra. As in other parts of mathematics, concrete problems and examples have played important roles in the development of abstract algebra. Through the end of the nineteenth century, many – perhaps most – of these problems were in some way related to the theory of algebraic equations. Numerous textbooks in abstract algebra start with axiomatic definitions of various algebraic structures, which creates a false impression that in algebra axioms had come first and then served as a motivation for further study. The true order of historical development was almost exactly the opposite; for example, the hypercomplex numbers of the nineteenth century had kinematic and physical motivations. An archetypical example of this progressive synthesis can be seen in the history of group theory. There were several threads in the early development of group theory, in modern language loosely corresponding to number theory, theory of equations, and geometry. Leonhard Euler considered algebraic operations on numbers modulo an integer – modular arithmetic. Lagrange's goal was to understand why equations of third and fourth degree admit formulae for solutions, and he identified as key objects permutations of the roots. An important novel step taken by Lagrange in this paper was the abstract view of the roots, i.e. as symbols and not as numbers. 
However, he did not consider composition of permutations. Serendipitously, the first edition of Edward Waring's Meditationes Algebraicae appeared in the same year, with an expanded version published in 1782. Waring proved the fundamental theorem on symmetric functions, and specially considered the relation between the roots of a quartic equation and its resolvent cubic. Kronecker claimed in 1888 that the study of modern algebra began with this first paper of Vandermonde; Cauchy stated quite clearly that Vandermonde had priority over Lagrange for this remarkable idea, which eventually led to the study of group theory. Paolo Ruffini was the first person to develop the theory of permutation groups, and his goal was to establish the impossibility of an algebraic solution to a general algebraic equation of degree greater than four
3.
Complemented lattice
–
In the mathematical discipline of order theory, a complemented lattice is a bounded lattice (with least element 0 and greatest element 1) in which every element a has a complement, i.e. an element b satisfying a ∨ b = 1 and a ∧ b = 0. A relatively complemented lattice is a lattice such that every interval, viewed as a bounded lattice in its own right, is a complemented lattice. An orthocomplementation on a complemented lattice is an involution which is order-reversing and maps each element to a complement. An orthocomplemented lattice satisfying a weak form of the modular law is called an orthomodular lattice. In distributive lattices, complements are unique; every complemented distributive lattice has a unique orthocomplementation and is in fact a Boolean algebra. A complemented lattice is a bounded lattice in which every element a has a complement, i.e. an element b such that a ∨ b = 1 and a ∧ b = 0. In general an element may have more than one complement; however, in a distributive lattice every element will have at most one complement. In other words, a relatively complemented lattice is characterized by the property that for every element a in an interval [c, d] there is an element b such that a ∨ b = d and a ∧ b = c. Such an element b is called a complement of a relative to the interval. A distributive lattice is complemented if and only if it is bounded and relatively complemented. The lattice of subspaces of a vector space provides an example of a complemented lattice that is not, in general, distributive. An orthocomplementation maps each element a to an orthocomplement a⊥ and is order-reversing: if a ≤ b then b⊥ ≤ a⊥. An orthocomplemented lattice or ortholattice is a bounded lattice which is equipped with an orthocomplementation. The lattice of subspaces of an inner product space, with the orthogonal complement operation, provides an example of an orthocomplemented lattice that is not, in general, distributive. Boolean algebras are a special case of orthocomplemented lattices, which in turn are a special case of complemented lattices. The ortholattices are most often used in quantum logic, where the closed subspaces of a separable Hilbert space represent quantum propositions. Orthocomplemented lattices, like Boolean algebras, satisfy De Morgan's laws. A lattice is called modular if for all elements a, b and c the implication "if a ≤ c, then a ∨ (b ∧ c) = (a ∨ b) ∧ c" holds. 
This is weaker than distributivity; e.g. the above-shown lattice M3 is modular but not distributive. A natural further weakening of this condition for orthocomplemented lattices, necessary for applications in quantum logic, is to require it only in the special case b = a⊥. An orthomodular lattice is therefore defined as an orthocomplemented lattice such that for any two elements the implication "if a ≤ c, then a ∨ (a⊥ ∧ c) = c" holds. Lattices of this form are of importance for the study of quantum logic
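The distinction between modularity and distributivity can be checked by brute force on a small example. The following Python sketch (element labels and encoding are mine) verifies that the diamond lattice M3 — a bottom, a top, and three pairwise incomparable atoms — is modular but not distributive:

```python
# M3: bottom '0', top '1', and three pairwise incomparable atoms a, b, c.
ELEMS = ['0', 'a', 'b', 'c', '1']

def meet(x, y):
    if x == y: return x
    if x == '0' or y == '0': return '0'
    if x == '1': return y
    if y == '1': return x
    return '0'          # two distinct atoms meet in the bottom

def join(x, y):
    if x == y: return x
    if x == '1' or y == '1': return '1'
    if x == '0': return y
    if y == '0': return x
    return '1'          # two distinct atoms join in the top

def leq(x, y): return meet(x, y) == x

# Modular law: a <= c implies a ∨ (b ∧ c) = (a ∨ b) ∧ c.
modular = all(join(a, meet(b, c)) == meet(join(a, b), c)
              for a in ELEMS for b in ELEMS for c in ELEMS if leq(a, c))
# Distributive law: x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z).
distributive = all(meet(x, join(y, z)) == join(meet(x, y), meet(x, z))
                   for x in ELEMS for y in ELEMS for z in ELEMS)
print(modular, distributive)  # True False
```

Distributivity fails at, for instance, x = a, y = b, z = c: a ∧ (b ∨ c) = a, while (a ∧ b) ∨ (a ∧ c) = 0.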
4.
Distributive lattice
–
In mathematics, a distributive lattice is a lattice in which the operations of join and meet distribute over each other. The prototypical examples of such structures are collections of sets for which the lattice operations can be given by set union and intersection. Indeed, these lattices of sets describe the scenery completely: every distributive lattice is – up to isomorphism – given as such a lattice of sets. As in the case of arbitrary lattices, one can choose to consider a distributive lattice L either as a structure of order theory or of universal algebra. Both views and their mutual correspondence are discussed in the article on lattices. In the present situation, the algebraic description appears to be more convenient: a lattice is distributive if the following additional identity holds for all x, y, and z in L: x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z). Viewing lattices as partially ordered sets, this says that the meet operation preserves non-empty finite joins, and it is a basic fact of lattice theory that the above condition is equivalent to its dual: x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z) for all x, y, and z in L. In every lattice, defining p ≤ q as usual to mean p ∧ q = p, the inequality x ∧ (y ∨ z) ≥ (x ∧ y) ∨ (x ∧ z) holds, as does its dual; a lattice is distributive if one of the converse inequalities holds, too. More information on the relationship of this condition to other distributivity conditions of order theory can be found in the article on distributivity. A morphism of distributive lattices is just a lattice homomorphism as given in the article on lattices. Because such a morphism of lattices preserves the lattice structure, it will consequently also preserve the distributivity. Distributive lattices are ubiquitous but also rather specific structures. As already mentioned, the main example for distributive lattices are lattices of sets, where join and meet are given by the usual set-theoretic operations. Further examples include: The Lindenbaum algebra of most logics that support conjunction and disjunction is a distributive lattice, i.e. "and" distributes over "or" and vice versa. Every Boolean algebra is a distributive lattice; every Heyting algebra is a distributive lattice. 
Especially this includes all locales and hence all open set lattices of topological spaces. Also note that Heyting algebras can be viewed as Lindenbaum algebras of intuitionistic logic, which makes them a special case of the above example. Every totally ordered set is a distributive lattice with max as join and min as meet. The natural numbers form a distributive lattice by taking the greatest common divisor as meet and the least common multiple as join. This lattice also has a least element, namely 1, which serves as the identity element for joins
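The gcd/lcm example can be spot-checked numerically. A small Python sketch (the range 1–29 is an arbitrary choice of mine, not an exhaustive proof):

```python
# Spot-check distributivity in the divisibility lattice:
# meet = gcd, join = lcm, so gcd(x, lcm(y, z)) should equal
# lcm(gcd(x, y), gcd(x, z)) for all positive integers.
from math import gcd

def lcm(a, b): return a * b // gcd(a, b)

ok = all(gcd(x, lcm(y, z)) == lcm(gcd(x, y), gcd(x, z))
         for x in range(1, 30) for y in range(1, 30) for z in range(1, 30))
print(ok)  # True
```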
5.
Set (mathematics)
–
In mathematics, a set is a well-defined collection of distinct objects, considered as an object in its own right. For example, the numbers 2, 4, and 6 are distinct objects when considered separately, but when they are considered collectively they form a single set of size three, written {2, 4, 6}. Sets are one of the most fundamental concepts in mathematics. Developed at the end of the 19th century, set theory is now a ubiquitous part of mathematics. In mathematics education, elementary topics such as Venn diagrams are taught at a young age. The German word Menge, rendered as "set" in English, was coined by Bernard Bolzano in his work The Paradoxes of the Infinite. A set is a collection of distinct objects. The objects that make up a set can be anything: numbers, people, letters of the alphabet, other sets, and so on. Sets are conventionally denoted with capital letters. Sets A and B are equal if and only if they have precisely the same elements. Cantor's definition turned out to be inadequate; instead, the notion of a set is taken as an undefined primitive notion in axiomatic set theory. There are two ways of describing, or specifying the members of, a set. One way is by intensional definition, using a rule or semantic description: A is the set whose members are the first four positive integers; B is the set of colors of the French flag. The second way is by extension – that is, listing each member of the set. An extensional definition is denoted by enclosing the list of members in curly brackets: C = {4, 2, 1, 3}; D = {blue, white, red}. One often has the choice of specifying a set either intensionally or extensionally. In the examples above, for instance, A = C and B = D. There are two important points to note about sets. First, in an extensional definition, a set member can be listed two or more times. However, per extensionality, two definitions of sets which differ only in that one of the definitions lists set members multiple times define, in fact, the same set. Hence, the set {11, 6} is identical to the set {11, 6, 6, 11}. 
The second important point is that the order in which the elements of a set are listed is irrelevant. We can illustrate these two important points with an example: {6, 11} = {11, 6} = {11, 6, 6, 11}. For sets with many elements, the enumeration of members can be abbreviated. For instance, the set of the first thousand positive integers may be specified extensionally as {1, 2, 3, ..., 1000}, where the ellipsis indicates that the list continues in the obvious way. Ellipses may also be used where sets have infinitely many members; thus the set of positive even numbers can be written as {2, 4, 6, 8, ...}
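Python's built-in set type follows the same two conventions — repetition and listing order are both irrelevant — which makes a convenient illustration (the variable names are mine):

```python
# Extensionality in Python's set type: only membership matters.
A = {1, 2, 3, 4}   # the first four positive integers
C = {4, 2, 1, 3}   # same members, listed in a different order

assert A == C                      # order of listing is irrelevant
assert {6, 11} == {11, 6, 6, 11}   # repeated members are irrelevant
print(len({11, 6, 6, 11}))         # 2
```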
6.
Logic
–
Logic, originally meaning "the word" or "what is spoken", is generally held to consist of the systematic study of the form of valid arguments. A valid argument is one where there is a specific relation of logical support between the assumptions of the argument and its conclusion. Historically, logic has been studied in philosophy and mathematics, and more recently logic has been studied in computer science, linguistics, and psychology. The concept of logical form is central to logic: the validity of an argument is determined by its logical form. Traditional Aristotelian syllogistic logic and modern symbolic logic are examples of formal logic. Informal logic is the study of natural language arguments; the study of fallacies is an important branch of informal logic, since much informal argument is not strictly speaking deductive. On some conceptions of logic, formal logic is the study of inference with purely formal content. An inference possesses a purely formal content if it can be expressed as an application of a wholly abstract rule, that is, a rule that is not about any particular thing or property. The works of Aristotle contain the earliest known formal study of logic; modern formal logic follows and expands on Aristotle. In many definitions of logic, logical inference and inference with purely formal content are the same. This does not render the notion of informal logic vacuous, because no formal logic captures all of the nuances of natural language. Symbolic logic is the study of symbolic abstractions that capture the formal features of logical inference; it is divided into two main branches, propositional logic and predicate logic. Mathematical logic is an extension of symbolic logic into other areas, in particular to the study of model theory, proof theory, and set theory. Logic is generally considered formal when it analyzes and represents the form of any valid argument type. The form of an argument is displayed by representing its sentences in the formal grammar and symbolism of a logical language to make its content usable in formal inference. 
Simply put, formalising means translating English sentences into the language of logic; this is called showing the logical form of the argument. It is necessary because indicative sentences of ordinary language show a considerable variety of form. Certain parts of the sentence must then be replaced with schematic letters. Thus, for example, the expression "all Ps are Qs" shows the logical form common to the sentences "all men are mortals", "all cats are carnivores", "all Greeks are philosophers", and so on. The schema can further be condensed, where a single letter indicates the judgement "all - are -". The importance of form was recognised from ancient times
7.
Power set
–
In mathematics, the power set of any set S is the set of all subsets of S, including the empty set and S itself. The power set of a set S is variously denoted as P(S), ℘(S), ℙ(S), or 2^S. In axiomatic set theory, the existence of the power set of any set is postulated by the axiom of power set. Any subset of P(S) is called a family of sets over S. If S is the set {x, y, z}, then the subsets of S are {}, {x}, {y}, {z}, {x, y}, {x, z}, {y, z}, and {x, y, z}, and hence the power set of S is {{}, {x}, {y}, {z}, {x, y}, {x, z}, {y, z}, {x, y, z}}. If S is a set with |S| = n elements, then the number of subsets of S is |P(S)| = 2^n. This fact, which is the motivation for the notation 2^S, may be demonstrated simply as follows: we write any subset of S in the format (γ1, ..., γn) where each γi, 1 ≤ i ≤ n, can take the value of 0 or 1. If γi = 1, the i-th element of S is in the subset; otherwise it is not. Clearly the number of subsets that can be constructed this way is 2^n, as each γi ∈ {0, 1}. Cantor's diagonal argument shows that the power set of a set always has strictly higher cardinality than the set itself. In particular, Cantor's theorem shows that the power set of a countably infinite set is uncountably infinite. The power set of the set of natural numbers can be put in a one-to-one correspondence with the set of real numbers. The power set of a set S, together with the operations of union, intersection and complement, can be viewed as the prototypical example of a Boolean algebra. In fact, one can show that any finite Boolean algebra is isomorphic to the Boolean algebra of the power set of a finite set. For infinite Boolean algebras this is no longer true, but every infinite Boolean algebra can be represented as a subalgebra of a power set Boolean algebra. The power set of a set S forms an abelian group when considered with the operation of symmetric difference, and a commutative monoid when considered with the operation of intersection. It can hence be shown that the power set considered together with both of these operations forms a Boolean ring. In set theory, X^Y is the set of all functions from Y to X; as 2 can be defined as {0, 1}, 2^S is the set of all functions from S to {0, 1}. 
Hence 2^S and P(S) could be considered identical set-theoretically. This notion can be applied to the example above, in which S = {x, y, z}, to see the isomorphism with the binary numbers from 0 to 2^n − 1, with n being the number of elements in the set: in the bitstring for a subset of S, a 1 in the position corresponding to the location of an element in the set indicates the presence of that element. The number of subsets with k elements in the power set of a set with n elements is given by the number of combinations, C(n, k)
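The bitstring correspondence described above translates directly into an enumeration of the power set. A Python sketch (the function name is mine):

```python
# Enumerate the power set of s via the bitstring correspondence:
# each subset corresponds to an n-bit mask, bit i marking membership
# of the i-th element, so there are 2**n subsets in all.
def power_set(s):
    elems = list(s)
    n = len(elems)
    return [{elems[i] for i in range(n) if (mask >> i) & 1}
            for mask in range(2 ** n)]

subsets = power_set({'x', 'y', 'z'})
print(len(subsets))  # 8, i.e. 2**3
```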
8.
Truth value
–
In logic and mathematics, a truth value, sometimes called a logical value, is a value indicating the relation of a proposition to truth. In classical logic, with its intended semantics, the truth values are true and untrue or false; this set of two values is called the Boolean domain. Corresponding semantics of logical connectives are truth functions, whose values are expressed in the form of truth tables; logical biconditional becomes the equality binary relation, and negation becomes a bijection which permutes true and false. Conjunction and disjunction are dual with respect to negation, which is expressed by De Morgan's laws. Assigning values for propositional variables is referred to as valuation. In intuitionistic logic, and more generally in constructive mathematics, statements are assigned a truth value only if they can be given a constructive proof. It starts with a set of axioms, and a statement is true if one can build a proof of the statement from those axioms; a statement is false if one can deduce a contradiction from it. This leaves open the possibility of statements that have not yet been assigned a truth value. Unproven statements in intuitionistic logic are not given an intermediate truth value; indeed, one can prove that they have no third truth value. There are various ways of interpreting intuitionistic logic, including the Brouwer–Heyting–Kolmogorov interpretation; see also Intuitionistic Logic - Semantics. Multi-valued logics allow for more than two truth values, possibly containing some internal structure; for example, on the unit interval [0, 1] such structure is a total order. Not all logical systems are truth-valuational in the sense that logical connectives may be interpreted as truth functions, but even non-truth-valuational logics can associate values with logical formulae, as is done in algebraic semantics. The algebraic semantics of intuitionistic logic is given in terms of Heyting algebras; intuitionistic type theory uses types in the place of truth values. 
Topos theory uses truth values in a special sense: the truth values of a topos are the global elements of the subobject classifier. Having truth values in this sense does not make a logic truth-valuational
9.
Boolean algebra (structure)
–
In abstract algebra, a Boolean algebra or Boolean lattice is a complemented distributive lattice. This type of algebraic structure captures essential properties of both set operations and logic operations. A Boolean algebra can be seen as a generalization of a power set algebra or a field of sets, and it is also a special case of a De Morgan algebra and a Kleene algebra (with involution). The term Boolean algebra honors George Boole, a self-educated English mathematician. Boole's formulation differs from that described above in some important respects; for example, conjunction and disjunction in Boole were not a dual pair of operations. Boolean algebra emerged in the 1860s, in papers written by William Jevons and Charles Sanders Peirce. The first systematic presentation of Boolean algebra and distributive lattices is owed to the 1890 Vorlesungen of Ernst Schröder. The first extensive treatment of Boolean algebra in English is A. N. Whitehead's 1898 Universal Algebra. Boolean algebra as an axiomatic algebraic structure in the modern axiomatic sense begins with a 1904 paper by Edward V. Huntington. Boolean algebra came of age as serious mathematics with the work of Marshall Stone in the 1930s, and with Garrett Birkhoff's 1940 Lattice Theory. In the 1960s, Paul Cohen, Dana Scott, and others found deep new results in mathematical logic and axiomatic set theory using offshoots of Boolean algebra, namely forcing and Boolean-valued models. A Boolean algebra with only one element is called a trivial Boolean algebra or a degenerate Boolean algebra. It follows from the last three pairs of axioms above, or from the absorption axiom, that a = b ∧ a if and only if a ∨ b = b. The relation ≤ defined by a ≤ b if these equivalent conditions hold is a partial order with least element 0. The meet a ∧ b and the join a ∨ b of two elements coincide with their infimum and supremum, respectively, with respect to ≤. The first four pairs of axioms constitute a definition of a bounded lattice. 
It follows from the first five pairs of axioms that any complement is unique. The set of axioms is self-dual in the sense that if one exchanges ∨ with ∧ and 0 with 1 in an axiom, the result is again an axiom. Therefore, by applying this operation to a Boolean algebra, one obtains another Boolean algebra with the same elements. In the two-element Boolean algebra, furthermore, every possible input-output behavior of a logic gate network can be modeled by a suitable Boolean expression. The power set of any set S forms a Boolean algebra; its smallest element 0 is the empty set and its largest element 1 is the set S itself. Starting with the propositional calculus with κ sentence symbols, form the Lindenbaum algebra. This construction yields a Boolean algebra; it is in fact the free Boolean algebra on κ generators. A truth assignment in propositional calculus is then a Boolean algebra homomorphism from this algebra to the two-element Boolean algebra. Interval algebras are useful in the study of Lindenbaum–Tarski algebras; every countable Boolean algebra is isomorphic to an interval algebra. For any natural number n, the set of all positive divisors of n, defining a ≤ b if a divides b, forms a distributive lattice; this lattice is a Boolean algebra if and only if n is square-free
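The divisor example can be illustrated concretely. The following Python sketch checks, for the arbitrarily chosen square-free n = 30, that n // d acts as a complement of each divisor d (meet = gcd, join = lcm, bottom = 1, top = n):

```python
# Divisors of a square-free n form a Boolean algebra under divisibility.
from math import gcd

n = 30  # square-free: 2 * 3 * 5
divisors = [d for d in range(1, n + 1) if n % d == 0]

def lcm(a, b): return a * b // gcd(a, b)

# Complement check: d ∨ (n // d) = n (the top) and d ∧ (n // d) = 1 (the bottom).
ok = all(lcm(d, n // d) == n and gcd(d, n // d) == 1 for d in divisors)
print(ok)  # True
```

For a non-square-free n such as 12, the element 2 has no complement, which is where the construction fails.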
10.
Boolean ring
–
In mathematics, a Boolean ring R is a ring for which x² = x for all x in R, such as the ring of integers modulo 2; that is, R consists only of idempotent elements. Every Boolean ring gives rise to a Boolean algebra, with ring multiplication corresponding to conjunction or meet ∧, and ring addition to exclusive disjunction or symmetric difference. Boolean rings are named after the founder of Boolean algebra, George Boole. There are at least four different and incompatible systems of notation for Boolean rings and algebras. In commutative algebra the standard notation is to use x + y = (x ∧ ¬y) ∨ (¬x ∧ y) for the ring sum of x and y, and xy = x ∧ y for their product. In logic, a common notation is to use x ∧ y for the meet and x ∨ y for the join. In set theory and logic it is also common to use x · y for the meet and x + y for the join; this use of + is different from the use in ring theory. A rare convention is to use xy for the product and x ⊕ y for the ring sum, in an effort to avoid the ambiguity of +. Historically, the term "Boolean ring" has been used to mean a Boolean ring possibly without an identity. One example of a Boolean ring is the power set of any set X, where the addition in the ring is symmetric difference, and the multiplication is intersection. As another example, we can consider the set of all finite or cofinite subsets of X, again with symmetric difference and intersection as operations. More generally, with these operations any field of sets is a Boolean ring; by Stone's representation theorem every Boolean ring is isomorphic to a field of sets. Since the join operation ∨ in a Boolean algebra is often written additively, it makes sense in this context to denote ring addition by ⊕. Given a Boolean ring R, for x and y in R we can define x ∧ y = xy and x ∨ y = x ⊕ y ⊕ xy, and these operations then satisfy all of the axioms for meets, joins, and complements in a Boolean algebra. Thus every Boolean ring becomes a Boolean algebra. Similarly, every Boolean algebra becomes a Boolean ring thus: xy = x ∧ y, x ⊕ y = (x ∨ y) ∧ ¬(x ∧ y). 
If a Boolean ring is translated into a Boolean algebra in this way, and then the Boolean algebra is translated into a ring, the result is the original ring; the analogous result holds beginning with a Boolean algebra. A map between two Boolean rings is a ring homomorphism if and only if it is a homomorphism of the corresponding Boolean algebras. Furthermore, a subset of a Boolean ring is a ring ideal if and only if it is an order ideal of the corresponding Boolean algebra. The quotient ring of a Boolean ring modulo a ring ideal corresponds to the factor algebra of the corresponding Boolean algebra modulo the corresponding order ideal. The property x ⊕ x = 0 shows that any Boolean ring is an algebra over the field F2 with two elements, in just one way
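The ring-to-algebra translation can be exercised on the power-set example. A Python sketch using frozensets (function names are mine): symmetric difference plays ring addition, intersection plays ring multiplication, and the derived join x ⊕ y ⊕ xy recovers set union.

```python
# Power set of a set as a Boolean ring, and the derived lattice operations.
def ring_add(a, b): return a ^ b   # symmetric difference = ring addition
def ring_mul(a, b): return a & b   # intersection = ring multiplication

def meet(a, b): return ring_mul(a, b)                          # x ∧ y = xy
def join(a, b): return ring_add(ring_add(a, b), ring_mul(a, b))  # x ⊕ y ⊕ xy

A, B = frozenset({1, 2}), frozenset({2, 3})
print(join(A, B) == A | B, meet(A, B) == A & B)  # True True
```

Note also that ring_add(A, A) is the empty set, the x ⊕ x = 0 property mentioned above.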
11.
Logical conjunction
–
In logic and mathematics, and is the truth-functional operator of logical conjunction; the and of a set of operands is true if and only if all of its operands are true. The logical connective that represents this operator is typically written as ∧ or ⋅. "A and B" is true only if A is true and B is true. An operand of a conjunction is a conjunct. Related concepts in other fields are: in natural language, the coordinating conjunction "and"; in programming languages, the short-circuit and control structure. And is usually denoted by an infix operator: in mathematics and logic, ∧ or ×; in electronics, ⋅. In Jan Łukasiewicz's prefix notation for logic, the operator is K. Logical conjunction is an operation on two logical values, typically the values of two propositions, that produces a value of true if and only if both of its operands are true. The conjunctive identity is 1 (true), which is to say that AND-ing an expression with 1 will never change the value of the expression. In keeping with the concept of vacuous truth, when conjunction is defined as an operator or function of arbitrary arity, the empty conjunction is often defined as having the result true. As a rule of inference, conjunction introduction is a classically valid, simple argument form. The argument form has two premises, A and B; intuitively, it permits the inference of their conjunction: A, B; therefore, A and B. Or in logical operator notation: A, B ⊢ A ∧ B. Here is an example of an argument that fits the form conjunction introduction: Bob likes apples. Bob likes oranges. Therefore, Bob likes apples and oranges. Conjunction elimination is another classically valid, simple argument form. Intuitively, it permits the inference from any conjunction of either element of that conjunction: A and B; therefore, A. Or alternately: A and B; therefore, B. In logical operator notation: A ∧ B ⊢ A. Conjunction is falsehood-preserving: when all inputs are false, the output is false. If using binary values for true and false, then logical conjunction works exactly like normal arithmetic multiplication. 
Many languages also provide short-circuit control structures corresponding to logical conjunction. Logical conjunction is used for bitwise operations, where 0 corresponds to false and 1 to true: 0 AND 0 = 0, 0 AND 1 = 0, 1 AND 0 = 0, 1 AND 1 = 1. The operation can also be applied to two binary words viewed as bitstrings of equal length, by taking the bitwise AND of each pair of bits at corresponding positions. For example, 11000110 AND 10100011 = 10000010. This can be used to select part of a bitstring using a bit mask; for example, 10011101 AND 00001000 = 00001000 extracts the fifth bit of an 8-bit bitstring
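The bitwise examples above can be reproduced directly with Python's integer & operator:

```python
# Bitwise AND on 8-bit words, matching the examples in the text.
a = 0b11000110
b = 0b10100011
print(format(a & b, '08b'))  # 10000010

word = 0b10011101
mask = 0b00001000            # selects the fifth bit from the left
print(format(word & mask, '08b'))  # 00001000
```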
12.
Join and meet
–
In a partially ordered set P, the join and meet of a subset S are respectively the supremum of S, denoted ⋁S, and the infimum of S, denoted ⋀S. In general, the join and meet of a subset of a partially ordered set need not exist. Join and meet can also be defined as a commutative, associative and idempotent partial binary operation on pairs of elements. If a and b are elements from P, the join is denoted as a ∨ b and the meet is denoted a ∧ b. Join and meet are symmetric duals with respect to order inversion. The join/meet of a subset of a totally ordered set is simply its maximal/minimal element, if such an element exists. A partially ordered set in which all pairs have a join is a join-semilattice; dually, a partially ordered set in which all pairs have a meet is a meet-semilattice. A partially ordered set that is both a join-semilattice and a meet-semilattice is a lattice. A lattice in which every subset, not just every pair, possesses a meet and a join is a complete lattice. It is also possible to define a partial lattice, in which not all pairs have a meet or join. Let A be a set with a partial order ≤. An element z of A is the meet of x and y if the following two conditions are satisfied: z ≤ x and z ≤ y; and for any w in A such that w ≤ x and w ≤ y, we have w ≤ z. If there is a meet of x and y, then it is unique, since if both z and z′ are greatest lower bounds of x and y, then z ≤ z′ and z′ ≤ z, whence z = z′. If the meet does exist, it is denoted x ∧ y. Some pairs of elements in A may lack a meet, either since they have no lower bound at all, or since none of their lower bounds is greater than all the others. If all pairs of elements have meets, then the meet is a binary operation on A. By definition, a binary operation ∧ on a set A is a meet if it is commutative, associative, and idempotent. The pair (A, ∧) is then a meet-semilattice. Moreover, we then may define a binary relation ≤ on A, by stating that x ≤ y if and only if x = x ∧ y; in fact, this relation is a partial order on A. Note that both meets and joins equally satisfy this definition: a couple of associated meet and join operations yield partial orders which are the reverse of each other. 
When choosing one of these orders as the main one, one also fixes which operation is considered a meet. Thus, the partial order defined by the meet in the universal algebra approach coincides with the original partial order
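The recovery of the order from the meet operation can be illustrated with gcd as the meet on positive integers; the derived relation x ≤ y iff x ∧ y = x is exactly divisibility. A Python sketch (the function name is mine):

```python
# Recover the partial order from a meet operation: with meet = gcd,
# x <= y holds iff gcd(x, y) == x, i.e. iff x divides y.
from math import gcd

def leq(x, y):
    return gcd(x, y) == x

print(leq(3, 12), leq(4, 6))  # True False
```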
13.
Exclusive or
–
Exclusive or or exclusive disjunction is a logical operation that outputs true only when its inputs differ (one is true, the other is false). It is symbolized by the prefix operator J and by the infix operators XOR, EOR, EXOR, ⊻, ⊕, and ↮. The negation of XOR is logical biconditional, which outputs true only when both inputs are the same. It gains the name "exclusive or" because the meaning of "or" is ambiguous when both operands are true; the exclusive or operator excludes that case. This is sometimes thought of as "one or the other but not both", and could be written as "A or B, but not, A and B". More generally, XOR is true only when an odd number of inputs are true: a chain of XORs – a XOR b XOR c XOR d – is true whenever an odd number of the inputs are true and is false whenever an even number of inputs are true. The truth table of A XOR B shows that it outputs true whenever the inputs differ: 0 XOR 0 = false, 0 XOR 1 = true, 1 XOR 0 = true, 1 XOR 1 = false. Exclusive disjunction essentially means "either one, but not both"; in other words, the statement is true if and only if one operand is true and the other is false. For example, if two horses are racing, then one of the two will win the race, but not both of them. The exclusive or is equivalent to the negation of a logical biconditional, by the rules of material implication. The systems of inclusive disjunction and conjunction fail to form a group, which unfortunately prevents the combination of these two systems into larger structures, such as a mathematical ring. However, the system using exclusive or is an abelian group, and the combination of the operators ∧ and ⊕ over the elements {0, 1} produces the well-known field F2. This field can represent any logic obtainable with the system and has the benefit of the arsenal of algebraic analysis tools for fields. The Oxford English Dictionary explains "either, or" as follows: the primary function of either, etc., is to emphasize the perfect indifference of the two things or courses, but a secondary function is to emphasize the mutual exclusiveness, = either of the two, but not both. The exclusive-or explicitly states "one or the other, but not neither nor both". 
Following this kind of common-sense intuition about "or", it is sometimes argued that in many natural languages, English included, the word "or" has an exclusive sense. The exclusive disjunction of a pair of propositions p and q is supposed to mean that p is true or q is true, but not both. For example, it might be argued that the intention of a statement like "You may have coffee, or you may have tea" is that only one of the options may be taken. Certainly under some circumstances a sentence like this example should be taken as forbidding the possibility of accepting both options. Even so, there is reason to suppose that this sort of sentence is not disjunctive at all
14.
Symmetric difference
–
In mathematics, the symmetric difference, also known as the disjunctive union, of two sets is the set of elements which are in either of the sets but not in their intersection. The symmetric difference of the sets A and B is commonly denoted by A △ B or A ⊖ B; for example, the symmetric difference of {1, 2, 3} and {3, 4} is {1, 2, 4}. The power set of any set becomes a Boolean ring with symmetric difference as the addition of the ring and intersection as the multiplication of the ring. Furthermore, if we denote D = A △ B and I = A ∩ B, then D and I are always disjoint. The symmetric difference is commutative and associative: A △ B = B △ A and (A △ B) △ C = A △ (B △ C). The empty set is neutral, and every set is its own inverse. Taken together, we see that the power set of any set X becomes an abelian group if we use the symmetric difference as the operation. A group in which every element is its own inverse is sometimes called a Boolean group; sometimes the Boolean group is defined as the symmetric difference operation on a set. In the case where X has only two elements, the group thus obtained is the Klein four-group. Equivalently, a Boolean group is an elementary abelian 2-group; consequently, the group induced by the symmetric difference is in fact a vector space over the field with two elements, Z2. If X is finite, then the singletons form a basis of this vector space. This construction is used in graph theory to define the cycle space of a graph. In particular, (A △ B) △ (B △ C) = A △ C, and this implies a triangle inequality: the symmetric difference of A and C is contained in the union of the symmetric difference of A and B and that of B and C. Intersection distributes over symmetric difference, A ∩ (B △ C) = (A ∩ B) △ (A ∩ C), and this is the prototypical example of a Boolean ring. Further properties of the symmetric difference: A △ B = Aᶜ △ Bᶜ, and (⋃α∈I Aα) △ (⋃α∈I Bα) ⊆ ⋃α∈I (Aα △ Bα), where I is an arbitrary non-empty index set. If f : S → T is any function and A, B ⊆ T are any sets in f's codomain, then f⁻¹(A △ B) = f⁻¹(A) △ f⁻¹(B).
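These set identities can be verified concretely with Python's built-in set type, whose `^`, `&`, and `|` operators are symmetric difference, intersection, and union; the particular sets A, B, C are arbitrary choices for illustration:

```python
A, B, C = {1, 2, 3}, {3, 4}, {2, 4, 5}

# The worked example from the text: {1,2,3} symdiff {3,4} = {1,2,4}.
assert A ^ B == {1, 2, 4}

# Commutative and associative.
assert A ^ B == B ^ A
assert (A ^ B) ^ C == A ^ (B ^ C)

# The empty set is neutral, and every set is its own inverse.
assert A ^ set() == A and A ^ A == set()

# Triangle inequality: A symdiff C is contained in
# (A symdiff B) union (B symdiff C).
assert (A ^ C) <= (A ^ B) | (B ^ C)

# Intersection distributes over symmetric difference.
assert A & (B ^ C) == (A & B) ^ (A & C)
```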
The symmetric difference can be defined in any Boolean algebra by writing x △ y = (x ∨ y) ∧ ¬(x ∧ y) = (x ∧ ¬y) ∨ (¬x ∧ y) = x ⊕ y, and this operation has the same properties as the symmetric difference of sets. The repeated symmetric difference is in a sense equivalent to an operation on a multiset of sets, giving the set of elements which are in an odd number of sets. As above, the symmetric difference of a collection of sets contains just the elements which are in an odd number of the sets in the collection.
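As a small sketch of the "odd number of sets" characterisation, folding symmetric difference over a collection can be compared with an explicit parity count (the example sets are arbitrary):

```python
from collections import Counter
from functools import reduce

sets = [{1, 2}, {2, 3}, {1, 2, 4}]

# Folding symmetric difference over the collection...
folded = reduce(lambda x, y: x ^ y, sets, set())

# ...keeps exactly the elements that occur in an odd number of the sets.
counts = Counter(e for s in sets for e in s)
odd = {e for e, n in counts.items() if n % 2 == 1}

assert folded == odd
```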
15.
Logical disjunction
–
In logic and mathematics, or is the truth-functional operator of (inclusive) disjunction, also known as alternation; the or of a set of operands is true if and only if one or more of its operands is true. The logical connective that represents this operator is typically written as ∨ or +. "A or B" is true if A is true, or if B is true, or if both A and B are true. In logic, "or" by itself means the inclusive or, distinguished from an exclusive or. An operand of a disjunction is called a disjunct. Related concepts in other fields are: in natural language, the coordinating conjunction "or"; in programming languages, the short-circuit or control structure. Or is usually expressed with an infix operator: in mathematics and logic, ∨; in electronics, +; and in most programming languages, |, ||, or or. In Jan Łukasiewicz's prefix notation for logic, the operator is A. Logical disjunction is an operation on two logical values, typically the values of two propositions, that has a value of false if and only if both of its operands are false. More generally, a disjunction is a logical formula that can have one or more literals separated only by "or"s. A single literal is often considered to be a degenerate disjunction. The disjunctive identity is false, which is to say that the or of an expression with false has the same value as the original expression. In keeping with the concept of vacuous truth, when disjunction is defined as an operator or function of arbitrary arity, the empty disjunction is defined as false. Disjunction is falsehood-preserving: the interpretation under which all variables are assigned a value of false produces a truth value of false as a result of disjunction. The mathematical symbol for logical disjunction varies in the literature; in addition to the word "or" and the formula Apq, the symbol ∨, deriving from the Latin word vel, is commonly used for disjunction. For example, A ∨ B is read as "A or B"; such a disjunction is false if both A and B are false. In all other cases it is true. All of the following are disjunctions: A ∨ B; ¬A ∨ B; A ∨ ¬B ∨ ¬C ∨ D ∨ ¬E.
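Python's built-in `any()` behaves as an n-ary disjunction and illustrates the conventions above, including the disjunctive identity and the empty disjunction being false:

```python
# n-ary disjunction: true iff one or more operands is true.
assert any([False, True, False]) is True
assert any([False, False]) is False

# The disjunctive identity is false: (p or False) has the value of p.
for p in (True, False):
    assert (p or False) == p

# The empty disjunction is defined as false.
assert any([]) is False
```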
The corresponding operation in set theory is the set-theoretic union. Operators corresponding to logical disjunction exist in most programming languages. Disjunction is often used for bitwise operations; for example, x = x | 0b00000001 will force the final bit to 1 while leaving other bits unchanged. Logical disjunction is usually short-circuited: if the first operand evaluates to true, then the second operand is not evaluated. The logical disjunction operator thus usually constitutes a sequence point. In a parallel language, it is possible to evaluate both sides: they are evaluated in parallel, and if one terminates with value true, the other is interrupted
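A short Python sketch of both points, the bitwise use and the short-circuit behaviour; the `noisy` helper is our own, added to make the skipped evaluation observable:

```python
# Bitwise OR: force the final bit to 1 while leaving other bits unchanged.
x = 0b10110100
x = x | 0b00000001
assert x == 0b10110101

# Short-circuiting: when the first operand is true, the second is never
# evaluated, so the side effect below does not fire.
calls = []

def noisy():
    calls.append("evaluated")
    return False

result = True or noisy()
assert result is True and calls == []            # noisy() was skipped

result = False or noisy()
assert result is False and calls == ["evaluated"]  # now it ran
```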
16.
George Boole
–
George Boole was an English mathematician, educator, philosopher and logician. He worked in the fields of differential equations and algebraic logic; Boolean logic is credited with laying the foundations for the information age. Boole was born in Lincoln, Lincolnshire, England, the son of John Boole Sr. He had a primary school education and received lessons from his father, but had little further formal and academic teaching. William Brooke, a bookseller in Lincoln, may have helped him with Latin; he was self-taught in modern languages. At age 16 Boole became the breadwinner for his parents and three siblings, taking up a junior teaching position in Doncaster at Heigham's School. Boole participated in the Mechanics' Institute in the Greyfriars, Lincoln; without a teacher, it took him many years to master calculus. At age 19, Boole successfully established his own school in Lincoln; four years later he took over Hall's Academy in Waddington, outside Lincoln, following the death of Robert Hall. In 1840 he moved back to Lincoln, where he ran a boarding school. Boole became a prominent local figure, an admirer of John Kaye, the bishop. He took part in the campaign for early closing, and with E. R. Larken and others he set up a society in 1847. He associated also with the Chartist Thomas Cooper, whose wife was a relation. From 1838 onwards Boole was making contacts with sympathetic British academic mathematicians and reading more widely. He studied algebra in the form of symbolic methods, as far as these were understood at the time. Boole's status as a mathematician was recognised by his appointment in 1849 as the first professor of mathematics at Queen's College, Cork in Ireland. He met his wife, Mary Everest, there in 1850 while she was visiting her uncle John Ryall, who was Professor of Greek. They married in 1855. He maintained his ties with Lincoln, working there with E. R. Larken in a campaign to reduce prostitution.
Boole was awarded the Keith Medal by the Royal Society of Edinburgh in 1855 and was elected a Fellow of the Royal Society in 1857. He received honorary degrees of LL.D. from the University of Dublin and the University of Oxford. In late November 1864, Boole walked, in rain, from his home at Lichfield Cottage in Ballintemple to the university. He soon became ill, developing a cold and high fever. As his wife believed that remedies should resemble their cause, she put her husband to bed and poured buckets of water over him, the wet having brought on his illness. Boole's condition worsened, and on 8 December 1864 he died of fever-induced pleural effusion
17.
Augustus De Morgan
–
Augustus De Morgan was a British mathematician and logician. He formulated De Morgan's laws and introduced the term mathematical induction. Augustus De Morgan was born in Madurai, India in 1806. His father was Lieutenant-Colonel John De Morgan, who held various appointments in the service of the East India Company. His mother, Elizabeth Dodson, was descended from James Dodson, who computed a table of anti-logarithms, that is, numbers corresponding to exact logarithms. Augustus De Morgan became blind in one eye a month or two after he was born. The family moved to England when Augustus was seven months old. When De Morgan was ten years old, his father died, and Mrs. De Morgan resided at various places in the southwest of England. His mathematical talents went unnoticed until he was fourteen, when a family friend discovered him making an elaborate drawing of a figure in Euclid with ruler and compasses. She explained the aim of Euclid to Augustus and gave him an initiation into demonstration. He received his secondary education from Mr. Parsons, a fellow of Oriel College, Oxford, who appreciated classics better than mathematics. His mother was an active and ardent member of the Church of England and desired that her son should become a clergyman; De Morgan himself, however, wrote: "I shall use the word Anti-Deism to signify the opinion that there does not exist a Creator who made and sustains the Universe." His college tutor was John Philips Higman, FRS; at college he played the flute for recreation and was prominent in the musical clubs. His love of knowledge for its own sake interfered with training for the great mathematical race; as a consequence he came out fourth wrangler. This entitled him to the degree of Bachelor of Arts, but to take the degree of Master of Arts a theological test was then required. To the signing of any such test De Morgan felt a strong objection. In about 1875 theological tests for academic degrees were abolished in the Universities of Oxford and Cambridge.
As no career was open to him at his own university, he decided to go to the Bar and took up residence in London. About this time the movement for founding London University took shape: a body of liberal-minded men resolved to meet the difficulty by establishing in London a university on the principle of religious neutrality, and De Morgan, then 22 years of age, was appointed professor of mathematics. His introductory lecture "On the study of mathematics" is a discourse upon mental education of permanent value. The London University was a new institution, and the relations of the Council of management, the Senate of professors and the body of students were not well defined. A dispute arose between the professor of anatomy and his students, and in consequence of the action taken by the Council several of the professors resigned, De Morgan among them; another professor of mathematics was appointed, who then drowned a few years later. De Morgan had shown himself a prince of teachers, and he was invited to return to his chair. About this time the Society for the Diffusion of Useful Knowledge was founded; its object was to spread scientific and other knowledge by means of cheap and clearly written treatises by the best writers of the time, and one of its most voluminous and effective writers was De Morgan. When De Morgan came to reside in London he found a congenial friend in William Frend, notwithstanding his mathematical heresy about negative quantities
18.
Sir William Hamilton, 9th Baronet
–
Sir William Hamilton, 9th Baronet FRSE DD FSAS was a Scottish metaphysician. He is often referred to as William Stirling Hamilton of Preston, in reference to his mother. He was born in rooms at the University of Glasgow. He was from an academic family, his younger brother being Robert Hamilton, the economist. William Hamilton and a brother, Thomas Hamilton, were brought up by their mother. He obtained a first class in literis humanioribus and took his B.A. in 1811. He had been intended for the medical profession, but soon after leaving Oxford he gave up this idea. His life continued to be that of a student, and the years that followed were filled by researches of all kinds, while at the same time he was gradually forming his philosophic system. Two visits to Germany in 1817 and 1820 led to William's taking up the study of German and, later on, that of contemporary German philosophy, which was almost entirely neglected in British universities. Soon afterwards he was appointed professor of civil history, and as such delivered several courses of lectures on the history of modern Europe. The salary was £100 a year, derived from a beer tax; no pupils were compelled to attend, and the class dwindled. In January 1827 his mother, to whom he had been devoted, died. In March 1828 he married his cousin, Janet Marshall. Around this time he moved to a recently built townhouse at 11 Manor Place, in Edinburgh's west end. Much about this time he began the preparation of an annotated edition of Thomas Reid's works. Before, however, this design had been carried out, he was struck with paralysis of the right side. The edition of Reid appeared in 1846, but with only seven of the intended dissertations, one unfinished. At his death he had not completed the work; notes on the subjects to be discussed were found among his manuscripts. But the elaboration of the scheme in its details and applications continued during the few years that remained to occupy much of his leisure.
Out of this arose a controversy with Augustus De Morgan. The essay did not appear, but the results of the labour gone through are contained in the appendices to his Lectures on Logic. Hamilton also prepared extensive materials for a publication which he designed on the personal history, influence and opinions of Martin Luther. Here he advanced so far as to have planned and partly carried out the arrangement of the work, but it did not go further, and it still remains in manuscript
19.
William Stanley Jevons
–
William Stanley Jevons FRS was an English economist and logician. Irving Fisher described Jevons's book A General Mathematical Theory of Political Economy as the start of the mathematical method in economics. It made the case that economics, as a science concerned with quantities, is necessarily mathematical; in so doing, it expounded upon the "final" (marginal) utility theory of value. Jevons's work, along with similar discoveries made by Carl Menger in Vienna and by Léon Walras in Switzerland, marked the opening of a new period in the history of economic thought. Jevons's contribution to the marginal revolution in economics in the late 19th century established his reputation as a leading political economist and logician of the time. Jevons broke off his studies of the sciences in London in 1854 to work as an assayer in Sydney. Returning to the UK in 1859, he published General Mathematical Theory of Political Economy in 1862, outlining the marginal utility theory of value, and A Serious Fall in the Value of Gold in 1863. The most important of his works on logic and scientific methods is his Principles of Science, along with The Theory of Political Economy. Among his inventions was the logic piano, a mechanical computer. Jevons was born in Liverpool, Lancashire, England. His father, Thomas Jevons, a man of strong scientific tastes and a writer on legal and economic subjects, was an iron merchant; his mother, Mary Anne Jevons, was the daughter of William Roscoe. At the age of fifteen he was sent to London to attend University College School. The idea of leaving the UK was distasteful, but pecuniary considerations had, in consequence of the failure of his father's firm in 1847, become of vital importance, and Jevons left the UK for Sydney in June 1854 to take up a role as an assayer at the Mint. Jevons lived with his colleague and his wife, first at Church Hill, then in Annangrove at Petersham; in letters to his family he described his life, took photographs and produced a social map of Sydney.
Jevons returned to England via America five years later. Not long after taking his M.A. degree, Jevons obtained a post as tutor at Owens College, Manchester; in 1866 he was elected professor of logic and mental and moral philosophy. The next year he married Harriet Ann Taylor, whose father, John Edward Taylor, had been the founder and proprietor of the Manchester Guardian. Jevons suffered a good deal from ill health and sleeplessness, and in 1876 he was glad to exchange the Owens professorship for the professorship of political economy in University College, London. Travelling and music were the principal recreations of his life, but his health continued to be bad. He found his professorial duties increasingly irksome, and, feeling that the pressure of work left him no spare energy, he resigned the post in 1880. On 13 August 1882 he drowned whilst bathing near Hastings. Jevons was a prolific writer, and at the time of his death was a leader in the UK both as a logician and as an economist. Alfred Marshall said of his work in economics that it "will probably be found to have more constructive force than any, save that of Ricardo". Stanley Jevons was brought up a Christian Unitarian, and excerpts from his journals indicate he remained committed to his Christian beliefs until death
20.
Charles Sanders Peirce
–
Charles Sanders Peirce was an American philosopher, logician, mathematician, and scientist who is sometimes known as "the father of pragmatism". He was educated as a chemist and employed as a scientist for 30 years; today he is appreciated largely for his contributions to logic, mathematics, philosophy, scientific methodology, and semiotics, and for his founding of pragmatism. An innovator in mathematics, statistics, philosophy, research methodology, and various sciences, Peirce considered himself, first and foremost, a logician. He made major contributions to logic, but logic for him encompassed much of that which is now called epistemology and philosophy of science. As early as 1886 he saw that logical operations could be carried out by electrical switching circuits. In 1934, the philosopher Paul Weiss called Peirce "the most original and versatile of American philosophers and America's greatest logician". Webster's Biographical Dictionary said in 1943 that Peirce was "now regarded as the most original thinker" of his time, and Keith Devlin similarly referred to Peirce as one of the greatest philosophers ever. Peirce was born at 3 Phillips Place in Cambridge, Massachusetts. He was the son of Sarah Hunt Mills and Benjamin Peirce, himself a professor of astronomy and mathematics at Harvard University and perhaps the first serious research mathematician in America. At age 12, Charles read his older brother's copy of Richard Whately's Elements of Logic; so began his lifelong fascination with logic and reasoning. At Harvard, he began lifelong friendships with Francis Ellingwood Abbot, Chauncey Wright, and William James. One of his Harvard instructors, Charles William Eliot, formed an unfavorable opinion of Peirce. This opinion proved fateful, because Eliot, while President of Harvard (1869–1909, a period encompassing nearly all of Peirce's working life), repeatedly vetoed Harvard's employing Peirce in any capacity. Peirce suffered from his late teens onward from a condition then known as facial neuralgia.
Its consequences may have led to the isolation which made his life's later years so tragic. That employment exempted Peirce from having to take part in the Civil War; it would have been very awkward for him to do so. At the Survey, he worked mainly in geodesy and gravimetry. He was elected a resident fellow of the American Academy of Arts and Sciences in January 1867. From 1869 to 1872, he was employed as an assistant in Harvard's astronomical observatory, doing important work on determining the brightness of stars. On April 20, 1877 he was elected a member of the National Academy of Sciences. Also in 1877, he proposed measuring the meter as so many wavelengths of light of a certain frequency. During the 1880s, Peirce's indifference to bureaucratic detail waxed while his Survey work's quality and timeliness waned; Peirce took years to write reports that he should have completed in months. Meanwhile, he wrote entries, ultimately thousands during 1883–1909, on philosophy, logic, science, and other subjects for the encyclopedic Century Dictionary. In 1885, an investigation by the Allison Commission exonerated Peirce, but in 1891 Peirce resigned from the Coast Survey at Superintendent Thomas Corwin Mendenhall's request; he never again held regular employment. In 1879, Peirce had been appointed Lecturer in logic at Johns Hopkins University, which had strong departments in a number of areas that interested him, such as philosophy, psychology, and mathematics
21.
Alfred North Whitehead
–
Alfred North Whitehead OM FRS was an English mathematician and philosopher. In his early career Whitehead wrote primarily on mathematics and logic; his most notable work in these fields is the three-volume Principia Mathematica, which he wrote with his former student Bertrand Russell. Beginning in the late 1910s and early 1920s, Whitehead gradually turned his attention from mathematics to philosophy of science, and he developed a comprehensive metaphysical system which radically departed from most of western philosophy. Today Whitehead's philosophical works – particularly Process and Reality – are regarded as the foundational texts of process philosophy. For this reason, one of the most promising applications of Whitehead's thought in recent years has been in the area of ecological civilization, as developed by thinkers such as John B. Cobb, Jr. Alfred North Whitehead was born in Ramsgate, Kent, England, in 1861. His father, Alfred Whitehead, was a minister and schoolmaster of Chatham House Academy, a school for boys established by Thomas Whitehead, Alfred North's grandfather. Whitehead himself recalled both of them as being very successful schoolmasters, but that his grandfather was the more extraordinary man. Whitehead's mother was Maria Sarah Whitehead, formerly Maria Sarah Buckmaster. Whitehead was apparently not particularly close with his mother, as he never mentioned her in any of his writings, and there is evidence that Whitehead's wife, Evelyn, had a low opinion of her. Whitehead was educated at Sherborne School, Dorset, then considered one of the best public schools in the country. His childhood was described as over-protected, but at school he excelled in sports and mathematics and was head prefect of his class. In 1880, Whitehead began attending Trinity College, Cambridge, where his academic advisor was Edward John Routh. He earned his BA from Trinity in 1884 and graduated as fourth wrangler. In 1890, Whitehead married Evelyn Wade, an Irish woman raised in France; they had a daughter, Jessie Whitehead, and two sons, Thomas North Whitehead and Eric Whitehead.
Eric Whitehead died in action while serving in the Royal Flying Corps during World War I, at the age of 19. In 1910, Whitehead resigned his Senior Lectureship in Mathematics at Trinity. Toward the end of his time in England, Whitehead turned his attention to philosophy; though he had no advanced training in philosophy, his philosophical work soon became highly regarded. After publishing The Concept of Nature in 1920, he served as president of the Aristotelian Society from 1922 to 1923. In 1924, Henry Osborn Taylor invited the 63-year-old Whitehead to join the faculty at Harvard University as a professor of philosophy. During his time at Harvard, Whitehead produced his most important philosophical contributions. In 1925, he wrote Science and the Modern World, which was immediately hailed as an alternative to the Cartesian dualism that plagued popular science. A few years later, he published his seminal work Process and Reality. The Whiteheads spent the rest of their lives in the United States; Alfred North retired from Harvard in 1937 and remained in Cambridge. The two-volume biography of Whitehead by Victor Lowe is the most definitive presentation of his life. However, many details of Whitehead's life remain obscure because he left no Nachlass; additionally, Whitehead was known for his almost fanatical belief in the right to privacy, and for writing very few personal letters of the kind that would help to gain insight on his life
22.
Garrett Birkhoff
–
Garrett Birkhoff was an American mathematician, best known for his work in lattice theory. The son of the mathematician George David Birkhoff, Garrett was born in Princeton and began the Harvard University BA course in 1928 after less than seven years of prior formal education. Upon completing his Harvard BA in 1932, he went to Cambridge University in England to study mathematical physics. While visiting the University of Munich, he met Carathéodory, who pointed him towards two important texts: Van der Waerden on abstract algebra and Speiser on group theory. From these facts can be inferred the number and quality of Birkhoff's papers published by his 25th year. During the 1930s, Birkhoff, along with his Harvard colleagues Marshall Stone and Saunders Mac Lane, substantially advanced American teaching and research in abstract algebra. In 1941 he and Mac Lane published A Survey of Modern Algebra; Mac Lane and Birkhoff's Algebra is a more advanced text on abstract algebra. A number of papers he wrote in the 1930s culminated in his monograph Lattice Theory, and his 1935 paper "On the Structure of Abstract Algebras" founded a new branch of mathematics, universal algebra. During and after World War II, Birkhoff's interests gravitated towards what he called "engineering mathematics". During the war, he worked on radar aiming and ballistics, including the bazooka. In the development of weapons, mathematical questions arose, some of which had not yet been addressed by the literature on fluid dynamics; Birkhoff's research was presented in his texts on fluid dynamics, Hydrodynamics and Jets, Wakes and Cavities. Birkhoff, a friend of John von Neumann, took a close interest in the rise of the electronic computer. Birkhoff supervised the Ph.D. thesis of David M. Young on the solution of the partial differential equation of Poisson.
Extending the results of Young, the Birkhoff–Varga collaboration led to publications on positive operators. Birkhoff's research and consulting work developed computational methods besides numerical linear algebra. Birkhoff published more than 200 papers and supervised more than 50 Ph.D.s. He was a member of the National Academy of Sciences and the American Academy of Arts and Sciences; he was a Guggenheim Fellow for the academic year 1948–1949 and the president of the Society for Industrial and Applied Mathematics for 1966–1968, and he won a Lester R. Ford Award in 1974. Selected works:
Birkhoff, Garrett, Lattice Theory, American Mathematical Society Colloquium Publications 25, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-1025-5, MR 598630.
Birkhoff, Garrett; Mac Lane, Saunders, A Survey of Modern Algebra, A K Peters, ISBN 1-56881-068-7.
Birkhoff, Garrett, Hydrodynamics: A Study in Logic, Fact, and Similitude, Greenwood Press.
Birkhoff, Garrett; Zarantonello, E. H., Jets, Wakes, and Cavities.
Robertson, Edmund F., "Garrett Birkhoff", MacTutor History of Mathematics archive, University of St Andrews
23.
Dana Scott
–
Dana Stewart Scott is an American logician whose research career has involved computer science, mathematics, and philosophy. He has worked also on modal logic, topology, and category theory. He received his BA in Mathematics from the University of California, Berkeley, in 1954. He wrote his Ph.D. thesis on Convergent Sequences of Complete Theories under the supervision of Alonzo Church while at Princeton. Solomon Feferman writes of this period: Scott began his studies in logic at Berkeley in the early 50s while still an undergraduate. Scott was clearly in line to do a Ph.D. with Tarski, but the two had a falling-out; upset by that, Scott left for Princeton, where he finished with a Ph.D. under Alonzo Church. But it was not long before the relationship between them was mended, to the point that Tarski could say to him, "I hope I can call you my student." After completing his Ph.D. studies, he moved to the University of Chicago. With Michael O. Rabin he wrote the paper "Finite Automata and Their Decision Problems", introducing the idea of nondeterministic machines; this work led to the joint bestowal of the Turing Award on the two, for the introduction of this fundamental concept of computational complexity theory. During this period he started supervising Ph.D. students, such as James Halpern. Scott also began working on modal logic in this period, beginning a collaboration with John Lemmon, who moved to Claremont, California, in 1963. Later, Scott and Montague independently discovered an important generalisation of Kripke semantics for modal and tense logic. John Lemmon and Scott began work on a modal-logic textbook that was interrupted by Lemmon's death in 1966; Scott eventually published the work as An Introduction to Modal Logic. Following an initial observation of Robert Solovay, Scott formulated the concept of Boolean-valued model, as Solovay and Petr Vopěnka did likewise at around the same time. This work led to the award of the Leroy P. Steele Prize in 1972. Scott took up a post as Professor of Mathematical Logic on the Philosophy faculty of Oxford University in 1972.
He was a member of Merton College while at Oxford and is now an Honorary Fellow of the college. One of Scott's contributions is his formulation of domain theory, allowing programs involving recursive functions and looping-control constructs to be given denotational semantics. Additionally, he provided a foundation for the understanding of infinitary and continuous information through domain theory. He received the 2007 EATCS Award for his contribution to theoretical computer science. In 1994, he was inducted as a Fellow of the Association for Computing Machinery, and in 2012 he became a fellow of the American Mathematical Society. Selected works and further reading:
"Finite Automata and Their Decision Problems" (with M. O. Rabin).
A proof of the independence of the continuum hypothesis.
"Advice on Modal Logic", in Philosophical Problems in Logic, ed. K. Lambert.
Gierz, G.; Hofmann, K. H.; Keimel, K.; Lawson, J. D.; Mislove, M. W.; Scott, D. S., Continuous Lattices and Domains, Encyclopedia of Mathematics and its Applications.
See also: Scott's trick; Scott–Potter set theory.
Blackburn, de Rijke and Venema, Modal Logic.
Entry in the Stanford Encyclopedia of Philosophy.
Solomon Feferman and Anita Burdman Feferman, Alfred Tarski: Life and Logic, Cambridge University Press, ISBN 0-521-80240-7, ISBN 978-0-521-80240-6.
Denotational Semantics: The Scott–Strachey Approach to Programming Language Theory
24.
Associative property
–
In mathematics, the associative property is a property of some binary operations. In propositional logic, associativity is a valid rule of replacement for expressions in logical proofs. Within an expression containing two or more occurrences in a row of the same associative operator, the order in which the operations are performed does not matter as long as the sequence of the operands is not changed; that is, rearranging the parentheses in such an expression will not change its value. Consider the following equations: (2 + 3) + 4 = 2 + (3 + 4) = 9 and 2 × (3 × 4) = (2 × 3) × 4 = 24. Even though the parentheses were rearranged on each line, the values of the expressions were not altered. Since this holds true when performing addition and multiplication on any real numbers, it can be said that addition and multiplication of real numbers are associative operations. Associativity is not to be confused with commutativity, which addresses whether or not the order of two operands changes the result; for example, the order doesn't matter in the multiplication of real numbers, that is, a × b = b × a. Associative operations are abundant in mathematics; in fact, many algebraic structures explicitly require their binary operations to be associative. However, many important and interesting operations are non-associative; some examples include subtraction, exponentiation and the vector cross product. Formally, a binary operation on a set S is associative if (x ∗ y) ∗ z = x ∗ (y ∗ z) for all x, y, z in S. The associative law can also be expressed in functional notation thus: f(f(x, y), z) = f(x, f(y, z)). If a binary operation is associative, repeated application of the operation produces the same result regardless of how valid pairs of parentheses are inserted in the expression; this is called the generalized associative law. Thus the product can be written unambiguously as abcd. As the number of elements increases, the number of possible ways to insert parentheses grows quickly. Some examples of associative operations include the following. The concatenation of three strings can be computed by concatenating the first two strings and then appending the third, or by appending the concatenation of the last two strings to the first; the two methods produce the same result, so string concatenation is associative. In arithmetic, addition and multiplication of real numbers are associative, i.e. (x + y) + z = x + (y + z) = x + y + z and (xy)z = x(yz) = xyz for all x, y, z ∈ ℝ; because of associativity, the grouping parentheses can be omitted without ambiguity. Addition and multiplication of complex numbers and quaternions are associative.
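These groupings can be checked directly in Python; the arithmetic values are the worked example from the text, and the strings are arbitrary choices for illustration:

```python
s1, s2, s3 = "foo", "bar", "baz"

# String concatenation is associative: both groupings give the same result.
assert (s1 + s2) + s3 == s1 + (s2 + s3) == "foobarbaz"

# The worked arithmetic example: (2 + 3) + 4 = 2 + (3 + 4) = 9
# and 2 x (3 x 4) = (2 x 3) x 4 = 24.
assert (2 + 3) + 4 == 2 + (3 + 4) == 9
assert 2 * (3 * 4) == (2 * 3) * 4 == 24

# Associativity is distinct from commutativity: concatenation
# associates but does not commute.
assert s1 + s2 != s2 + s1
```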
Addition of octonions is also associative, but multiplication of octonions is non-associative. The greatest common divisor and least common multiple functions act associatively:

gcd(gcd(x, y), z) = gcd(x, gcd(y, z)) and lcm(lcm(x, y), z) = lcm(x, lcm(y, z)) for all x, y, z ∈ ℤ.

Taking the intersection or the union of sets is associative:

(A ∩ B) ∩ C = A ∩ (B ∩ C) and (A ∪ B) ∪ C = A ∪ (B ∪ C) for all sets A, B, C.

Slightly more generally, given four sets M, N, P and Q, with maps h: M → N, g: N → P and f: P → Q, the compositions satisfy f ∘ (g ∘ h) = (f ∘ g) ∘ h; in short, composition of maps is always associative. On the other hand, consider a set with three elements, A, B, and C; a binary operation on such a set given by an arbitrary table need not be associative, so that, for example, (AB)C and A(BC) may differ.
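These concrete instances can be verified directly. The sketch below checks gcd/lcm, set union/intersection, and composition of maps on small examples; the particular numbers, sets, and functions are arbitrary illustrative choices:

```python
import math

# Associativity of gcd and lcm on integers.
x, y, z = 12, 18, 30
assert math.gcd(math.gcd(x, y), z) == math.gcd(x, math.gcd(y, z))
lcm = lambda a, b: a * b // math.gcd(a, b)
assert lcm(lcm(x, y), z) == lcm(x, lcm(y, z))

# Associativity of union and intersection of sets.
A, B, C = {1, 2}, {2, 3}, {3, 4}
assert (A | B) | C == A | (B | C)
assert (A & B) & C == A & (B & C)

# Associativity of function composition: f(g(h(v))) either way.
compose = lambda g, h: (lambda v: g(h(v)))
f, g, h = (lambda v: v + 1), (lambda v: 2 * v), (lambda v: v ** 2)
assert compose(compose(f, g), h)(5) == compose(f, compose(g, h))(5)

print("all associativity checks passed")
```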
25.
Commutative property
–
In mathematics, a binary operation is commutative if changing the order of the operands does not change the result. It is a property of many binary operations, and many mathematical proofs depend on it. Most familiar as the name of the property that says 3 + 4 = 4 + 3 or 2 × 5 = 5 × 2, the property can also be used in more advanced settings. The name is needed because there are operations, such as division and subtraction, that do not have it; for example, 3 − 5 ≠ 5 − 3. The commutative property is a property associated with binary operations and functions. If the commutative property holds for a pair of elements under a binary operation, then the two elements are said to commute under that operation. The term commutative is used in several related senses. Putting on socks resembles a commutative operation, since which sock is put on first is unimportant; either way, the result is the same. In contrast, putting on underwear and trousers is not commutative. The commutativity of addition is observed when paying for an item with cash: regardless of the order in which the bills are handed over, they always give the same total. The multiplication of real numbers is commutative, since yz = zy for all y, z ∈ ℝ; for example, 3 × 5 = 5 × 3. Some binary truth functions are also commutative, since the truth tables for the functions are the same when one changes the order of the operands. For example, the logical biconditional function p ↔ q is equivalent to q ↔ p; this function is also written as p IFF q, or as p ≡ q, or as Epq. Further examples of commutative binary operations include addition and multiplication of complex numbers, and addition and scalar multiplication of vectors. Concatenation, the act of joining character strings together, is a noncommutative operation. Rotating a book 90° around a vertical axis then 90° around a horizontal axis produces a different orientation than when the rotations are performed in the opposite order. The twists of the Rubik's Cube are noncommutative; this can be studied using group theory. 
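A commutativity check parallels the definition above. This sketch, with an illustrative helper `is_commutative`, confirms that addition, multiplication, and the biconditional commute on small samples, while string concatenation does not:

```python
# Check op(x, y) == op(y, x) over all pairs from a sample.
def is_commutative(op, values):
    return all(op(x, y) == op(y, x) for x in values for y in values)

print(is_commutative(lambda a, b: a + b, [1, 2, 3]))     # True
print(is_commutative(lambda a, b: a * b, [1, 2, 3]))     # True

# The logical biconditional p <-> q is equality of truth values.
print(is_commutative(lambda p, q: p == q, [True, False]))  # True

# String concatenation is noncommutative: "ab"+"cd" != "cd"+"ab".
print(is_commutative(lambda s, t: s + t, ["ab", "cd"]))  # False
```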
Many other binary operations are likewise non-commutative. Records of the use of the commutative property go back to ancient times. The Egyptians used the commutative property of multiplication to simplify computing products. Euclid is known to have assumed the commutative property of multiplication in his book Elements
26.
Partially ordered set
–
In mathematics, especially order theory, a partially ordered set (or poset) formalizes and generalizes the intuitive concept of an ordering, sequencing, or arrangement of the elements of a set. A poset consists of a set together with a binary relation indicating that, for certain pairs of elements in the set, one of the elements precedes the other in the ordering. The word partial in the names partial order and partially ordered set is used as an indication that not every pair of elements need be comparable. That is, there may be pairs of elements for which neither element precedes the other in the poset. Partial orders thus generalize total orders, in which every pair is comparable. To be a partial order, a binary relation must be reflexive, antisymmetric, and transitive. One familiar example of a partially ordered set is a collection of people ordered by genealogical descendancy. Some pairs of people bear the descendant-ancestor relationship, but other pairs of people are incomparable. A poset can be visualized through its Hasse diagram, which depicts the ordering relation. Formally, a partial order is a binary relation ≤ over a set P satisfying particular axioms which are discussed below; when a ≤ b, we say that a is related to b. The axioms for a partial order state that the relation ≤ is reflexive, antisymmetric, and transitive. That is, for all a, b, and c in P, it must satisfy:

a ≤ a (reflexivity);
if a ≤ b and b ≤ a, then a = b (antisymmetry);
if a ≤ b and b ≤ c, then a ≤ c (transitivity).

In other words, a partial order is an antisymmetric preorder. A set with a partial order is called a partially ordered set. The term ordered set is also used, as long as it is clear from the context that no other kind of order is meant. In particular, totally ordered sets can also be referred to as ordered sets. For a, b, elements of a partially ordered set P, if a ≤ b or b ≤ a, then a and b are comparable. In the figure on the top-right, some pairs of elements are comparable while others are not. A partial order under which every pair of elements is comparable is called a total order or linear order; a totally ordered set is also called a chain. A subset of a poset in which no two elements are comparable is called an antichain. 
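The three axioms can be verified exhaustively for a small finite example. The sketch below checks divisibility on the divisors of 12; the particular set and the encoding of ≤ as "divides" are illustrative choices:

```python
# Divisibility as a partial order on the divisors of 12.
P = [1, 2, 3, 4, 6, 12]
leq = lambda a, b: b % a == 0  # a ≤ b means "a divides b"

reflexive     = all(leq(a, a) for a in P)
antisymmetric = all(not (leq(a, b) and leq(b, a)) or a == b
                    for a in P for b in P)
transitive    = all(not (leq(a, b) and leq(b, c)) or leq(a, c)
                    for a in P for b in P for c in P)
print(reflexive, antisymmetric, transitive)  # True True True

# The order is only partial: 4 and 6 are incomparable.
print(leq(4, 6) or leq(6, 4))  # False
```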
A more concise definition will be given below using the strict order < corresponding to ≤: an element y is said to cover an element x if x < y and there is no element z with x < z < y. For example, in the Hasse diagram in the figure, each edge connects an element to one that covers it. Standard examples of posets arising in mathematics include the real numbers ordered by the standard less-than-or-equal relation ≤, and the set of subsets of a given set ordered by inclusion
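The covering relation of a finite poset can be computed directly from the definition. This sketch lists the covers in the divisibility order on the divisors of 12 (an illustrative choice); these pairs are exactly the edges of the corresponding Hasse diagram:

```python
# Covering relation for divisibility on the divisors of 12.
P = [1, 2, 3, 4, 6, 12]
lt = lambda a, b: a != b and b % a == 0  # strict order a < b

def covers(a, b):
    """b covers a: a < b and no element lies strictly between them."""
    return lt(a, b) and not any(lt(a, c) and lt(c, b) for c in P)

print([(a, b) for a in P for b in P if covers(a, b)])
# [(1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)]
```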
27.
Infimum and supremum
–
In mathematics, the infimum of a subset S of a partially ordered set T is the greatest element of T that is less than or equal to all elements of S, if such an element exists. Consequently, the term greatest lower bound is also commonly used. The supremum of a subset S of a partially ordered set T is the least element of T that is greater than or equal to all elements of S, if such an element exists; consequently, the supremum is also referred to as the least upper bound. The infimum is in a precise sense dual to the concept of a supremum. Infima and suprema of real numbers are special cases that are important in analysis. However, the general definitions remain valid in the abstract setting of order theory, where arbitrary partially ordered sets are considered. The concepts of infimum and supremum are similar to minimum and maximum, but are more useful for sets that have no minimum or maximum. For instance, the set of positive real numbers ℝ+* does not have a minimum, because any given element of ℝ+* could simply be divided in half, resulting in a smaller number that is still in ℝ+*. There is, however, exactly one infimum of the positive real numbers: 0, which is smaller than all the positive real numbers and greater than any other lower bound. A lower bound of a subset S of a partially ordered set (P, ≤) is an element a of P such that a ≤ x for all x in S. A lower bound a of S is called an infimum of S if, for all lower bounds y of S in P, y ≤ a. Similarly, an upper bound of a subset S of a partially ordered set (P, ≤) is an element b of P such that b ≥ x for all x in S. An upper bound b of S is called a supremum of S if, for all upper bounds z of S in P, z ≥ b. Infima and suprema do not necessarily exist: existence of an infimum of a subset S of P can fail if S has no lower bound at all, or if the set of lower bounds does not contain a greatest element. However, if an infimum or supremum does exist, it is unique. Consequently, partially ordered sets for which certain infima are known to exist become especially interesting. More information on the various classes of partially ordered sets that arise from such considerations is found in the article on completeness properties. 
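The definitions translate directly into a brute-force computation on a finite poset. The sketch below uses the divisors of 12 under divisibility (an illustrative choice); there the infimum and supremum of a pair turn out to be their gcd and lcm:

```python
# Infimum and supremum under divisibility, computed from the definitions.
P = [1, 2, 3, 4, 6, 12]
leq = lambda a, b: b % a == 0  # a ≤ b means "a divides b"

def infimum(S):
    # Greatest lower bound: a lower bound divided by every lower bound.
    lower = [a for a in P if all(leq(a, x) for x in S)]
    return next((a for a in lower if all(leq(y, a) for y in lower)), None)

def supremum(S):
    # Least upper bound: an upper bound dividing every upper bound.
    upper = [b for b in P if all(leq(x, b) for x in S)]
    return next((b for b in upper if all(leq(b, z) for z in upper)), None)

print(infimum([4, 6]), supremum([4, 6]))  # 2 12  (their gcd and lcm)
```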
If the supremum of a subset S exists, it is unique. If S contains a greatest element, then that element is the supremum; otherwise, the supremum does not belong to S. Likewise, if the infimum exists, it is unique: if S contains a least element, then that element is the infimum. The infimum of a subset S of a partially ordered set P, assuming it exists, does not necessarily belong to S; if it does, it is a minimal or least element of S. Similarly, if the supremum of S belongs to S, it is a maximal or greatest element of S. For example, consider the set of negative real numbers. This set has no greatest element, since for every element of the set, there is another, larger, element; for instance, for any negative real number x, there is another negative real number x/2, which is greater. On the other hand, every real number greater than or equal to zero is certainly an upper bound on this set. Hence, 0 is the least upper bound of the negative reals, so the supremum is 0
28.
Lattice (order)
–
A lattice is an abstract structure studied in the mathematical subdisciplines of order theory and abstract algebra. It consists of a partially ordered set in which every two elements have a unique supremum and a unique infimum. An example is given by the natural numbers, partially ordered by divisibility, for which the unique supremum is the least common multiple and the unique infimum is the greatest common divisor. Lattices can also be characterized as algebraic structures satisfying certain axiomatic identities; since the two definitions are equivalent, lattice theory draws on both order theory and universal algebra. Semilattices include lattices, which in turn include Heyting and Boolean algebras; these lattice-like structures all admit order-theoretic as well as algebraic descriptions. If (L, ≤) is a partially ordered set, and S ⊆ L is an arbitrary subset, then an element u ∈ L is said to be an upper bound of S if s ≤ u for each s ∈ S. A set may have many upper bounds, or none at all. An upper bound u of S is said to be its least upper bound, or join, or supremum, if u ≤ x for each upper bound x of S. A set need not have a least upper bound, but it cannot have more than one. Dually, l ∈ L is said to be a lower bound of S if l ≤ s for each s ∈ S. A lower bound l of S is said to be its greatest lower bound, or meet, or infimum, if y ≤ l for each lower bound y of S. A set may have many lower bounds, or none at all, but can have at most one greatest lower bound. A partially ordered set (L, ≤) is called a join-semilattice if each two-element subset {a, b} ⊆ L has a join (denoted a ∨ b), a meet-semilattice if each such subset has a meet (denoted a ∧ b), and a lattice if it is both a join- and a meet-semilattice. This definition makes ∨ and ∧ binary operations. Both operations are monotone with respect to the order: a1 ≤ a2 and b1 ≤ b2 implies that a1 ∨ b1 ≤ a2 ∨ b2 and a1 ∧ b1 ≤ a2 ∧ b2. It follows by an induction argument that every non-empty finite subset of a lattice has a least upper bound and a greatest lower bound. 
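Under divisibility, meet and join are gcd and lcm, and the monotonicity law stated above can be checked exhaustively on a small lattice. A sketch, where the choice of the divisors of 12 is illustrative:

```python
import math

# The divisors of 12 under divisibility form a lattice:
# meet = gcd, join = lcm.
P = [1, 2, 3, 4, 6, 12]
leq  = lambda a, b: b % a == 0  # a ≤ b means "a divides b"
meet = math.gcd
join = lambda a, b: a * b // math.gcd(a, b)

# Every two-element subset has its join and meet inside P.
assert all(join(a, b) in P and meet(a, b) in P for a in P for b in P)

# Monotonicity: a1 ≤ a2 and b1 ≤ b2 imply
# a1 ∨ b1 ≤ a2 ∨ b2 and a1 ∧ b1 ≤ a2 ∧ b2.
for a1 in P:
    for a2 in P:
        for b1 in P:
            for b2 in P:
                if leq(a1, a2) and leq(b1, b2):
                    assert leq(join(a1, b1), join(a2, b2))
                    assert leq(meet(a1, b1), meet(a2, b2))
print("lattice checks passed")
```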
With additional assumptions, further conclusions may be possible; see Completeness for more discussion of this subject. A bounded lattice is a lattice that additionally has a greatest element 1 and a least element 0, which satisfy 0 ≤ x ≤ 1 for every x in L. The greatest and least elements are also called the maximum and minimum, or the top and bottom element. A partially ordered set is a bounded lattice if and only if every finite set of elements (including the empty set) has a join and a meet; the join of the empty set is the least element 0, and its meet is the greatest element 1. Taking B to be the empty set,

⋁(A ∪ ∅) = (⋁A) ∨ (⋁∅) = (⋁A) ∨ 0 = ⋁A
⋀(A ∪ ∅) = (⋀A) ∧ (⋀∅) = (⋀A) ∧ 1 = ⋀A

which is consistent with the fact that A ∪ ∅ = A. A lattice element y is said to cover another element x if y > x, but there does not exist a z such that y > z > x. Here, y > x means x ≤ y and x ≠ y
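The conventions ⋁∅ = 0 and ⋀∅ = 1 can be seen concretely in the bounded lattice of subsets of a set E, where the bottom is ∅, the top is E, and join/meet are union/intersection. A sketch; the set E and the family A are illustrative choices:

```python
from functools import reduce

# Power-set lattice of E: join = union, meet = intersection,
# bottom = empty set, top = E.
E = frozenset({1, 2, 3})
bottom, top = frozenset(), E

def big_join(family):   # ⋁ family, folding with identity ⋁∅ = bottom
    return reduce(lambda a, b: a | b, family, bottom)

def big_meet(family):   # ⋀ family, folding with identity ⋀∅ = top
    return reduce(lambda a, b: a & b, family, top)

A = [frozenset({1}), frozenset({1, 2})]
print(big_join([]) == bottom, big_meet([]) == top)  # True True
print(big_join(A) == frozenset({1, 2}))             # True

# Adding the empty set to a family leaves its join unchanged:
print(big_join(A + [bottom]) == big_join(A))        # True
```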