1.
George Boole
–
George Boole was an English mathematician, educator, philosopher and logician. He worked in the fields of differential equations and algebraic logic, and the Boolean logic he founded is credited with laying the foundations for the information age. Boole was born in Lincoln, Lincolnshire, England, the son of John Boole Sr. He had a primary school education and received lessons from his father, but had little further formal academic teaching. William Brooke, a bookseller in Lincoln, may have helped him with Latin, and he was self-taught in modern languages. At age 16 Boole became the breadwinner for his parents and three siblings, taking up a junior teaching position in Doncaster at Heigham's School. Boole participated in the Mechanics' Institute in the Greyfriars, Lincoln; without a teacher, it took him many years to master calculus. At age 19, Boole successfully established his own school in Lincoln; four years later he took over Hall's Academy in Waddington, outside Lincoln, following the death of Robert Hall. In 1840 he moved back to Lincoln, where he ran a boarding school. Boole became a prominent local figure, an admirer of John Kaye, the bishop. He took part in the campaign for early closing, and with E. R. Larken and others he set up a society in 1847. He associated also with the Chartist Thomas Cooper, whose wife was a relation. From 1838 onwards Boole was making contacts with sympathetic British academic mathematicians and reading more widely. He studied algebra in the form of symbolic methods, as far as these were understood at the time. Boole's status as a mathematician was recognised by his appointment in 1849 as the first professor of mathematics at Queen's College, Cork, in Ireland. He met his wife, Mary Everest, there in 1850 while she was visiting her uncle John Ryall, who was Professor of Greek. They married in 1855. He maintained his ties with Lincoln, working there with E. R. Larken in a campaign to reduce prostitution.
Boole was awarded the Keith Medal by the Royal Society of Edinburgh in 1855 and was elected a Fellow of the Royal Society in 1857. He received honorary degrees of LL.D. from the University of Dublin and the University of Oxford. In late November 1864, Boole walked, in the rain, from his home at Lichfield Cottage in Ballintemple to the university. He soon became ill, developing a cold and high fever. His wife believed that remedies should resemble their cause, so she put her husband to bed and poured buckets of water over him, the wet having brought on his illness. Boole's condition worsened, and on 8 December 1864 he died of fever-induced pleural effusion.

2.
Logic alphabet
–
The logic alphabet, also called the X-stem Logic Alphabet, constitutes an iconic set of symbols that systematically represents the sixteen possible binary truth functions of logic. The logic alphabet was developed by Shea Zellweger; the major emphasis of his iconic logic alphabet is to provide a more cognitively ergonomic notation for logic. Truth functions are functions from sequences of truth values to truth values. A unary truth function, for example, takes a single truth value and maps it onto another truth value; similarly, a binary truth function maps ordered pairs of truth values onto truth values, while a ternary truth function maps ordered triples of truth values onto truth values, and so on. In the unary case, there are two possible inputs, viz. T and F. In the form of a table, the four unary truth functions may be represented as follows. In the binary case, there are four possible inputs, viz. (T, T), (T, F), (F, T) and (F, F), thus yielding sixteen possible binary truth functions. Quite generally, for any n, there are 2^(2^n) possible n-ary truth functions. The sixteen possible binary truth functions are listed in the table below. Dr. Zellweger's logic alphabet offers a systematic way of representing each of the sixteen binary truth functions. Letter shapes are derived from the distribution of Ts in the matrix: when drawing a logic symbol, one passes through each square with an assigned F value while stopping in each square with an assigned T value. In the extreme examples, the symbol for tautology is an X. The square matrix corresponding to each binary truth function, as well as its corresponding letter shape, are displayed in the table below. The interest of the logic alphabet lies in its aesthetic and symmetric qualities, which combine to allow an individual to more easily, rapidly and visually manipulate the relationships between entire truth tables.
A logic operation performed on a two-dimensional logic alphabet connective, with its geometric qualities, produces a symmetry transformation. When a symmetry transformation occurs, each input symbol, without any further thought, immediately changes into the correct output symbol. Similar symmetry transformations can be obtained by operating upon the other symbols. In effect, the X-stem Logic Alphabet is derived from three disciplines that have been stacked and combined: mathematics, logic, and semiotics. Logic is sandwiched between mathematics and semiotics. Indeed, Zellweger has constructed intriguing structures involving the symbols of the logic alphabet on the basis of these symmetries. The considerable aesthetic appeal of the alphabet has led to exhibitions of Zellweger's work at the Museum of Jurassic Technology in Los Angeles. The value of the logic alphabet lies in its use as a visually simpler pedagogical tool than existing systems of logic notation, and it eases the introduction to the fundamentals of logic, especially for children. Various subsets of the sixteen binary connectives are themselves functionally complete, in that they suffice to define the remaining connectives.
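The counting in this section is easy to verify mechanically. Below is a small sketch (our own illustration, not Zellweger's notation; the names `inputs` and `functions` are ours) that enumerates the sixteen binary truth functions as truth tables and checks the general 2^(2^n) formula:

```python
from itertools import product

# The four possible binary inputs: (T,T), (T,F), (F,T), (F,F).
inputs = list(product([True, False], repeat=2))

# A binary truth function is determined by its outputs on those four
# inputs, so enumerating all output patterns enumerates all functions.
functions = [dict(zip(inputs, outputs))
             for outputs in product([True, False], repeat=4)]
print(len(functions))  # 16

# Quite generally, there are 2**(2**n) possible n-ary truth functions.
for n in (1, 2, 3):
    print(n, 2 ** (2 ** n))  # 4, 16, 256
```

The same enumeration with `repeat=2**n` outputs would list the n-ary truth functions for any n.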

3.
Free Boolean algebra
–
The generators of a free Boolean algebra can represent independent propositions. Consider, for example, the propositions "John is tall" and "Mary is rich". These generate a Boolean algebra with four atoms, the conjunctions of the two propositions and their negations. Other elements of the Boolean algebra are then logical disjunctions of the atoms, such as "John is tall and Mary is not rich, or John is not tall and Mary is rich". In addition there is one more element, FALSE, which can be thought of as the empty disjunction, that is, the disjunction of no atoms. This example yields a Boolean algebra with 16 elements; in general, for finite n, the free Boolean algebra with n generators has 2^n atoms and therefore 2^(2^n) elements. If there are infinitely many generators, a similar situation prevails except that now there are no atoms. Each element of the Boolean algebra is a combination of finitely many of the generating propositions, with two such elements deemed identical if they are logically equivalent. In fact, this generalizes to any algebraic structure definable in the framework of universal algebra. Above, we said that a free Boolean algebra is a Boolean algebra with a set of generators that behave a certain way; alternatively, one can start with a set. Every set X generates a free Boolean algebra FX, defined as the algebra such that for every algebra B and function f : X → B, there is a unique Boolean algebra homomorphism f′ extending f. Diagrammatically, where iX is the inclusion, and the dashed arrow denotes uniqueness. The idea is that once one chooses where to send the elements of X, the rest of the homomorphism is determined. If FX contained elements inexpressible as combinations of elements of X, then f′ wouldn't be unique, and it is easily shown that FX is unique (up to isomorphism), so this definition makes sense. It is also easily shown that a free Boolean algebra with generating set X, as defined originally, is isomorphic to FX. One shortcoming of the definition is that the diagram doesn't capture that f′ is a homomorphism.
We can fix this by separating it into two diagrams, one in BA and one in Set. To relate the two, we introduce a functor U : BA → Set that forgets the algebraic structure, mapping algebras and homomorphisms to their underlying sets and functions. The functor U can be thought of as a device to pull the homomorphism f′ back into Set so it can be related to f. The remarkable aspect of this is that the latter diagram is one of the various definitions of when two functors are adjoint. Our F easily extends to a functor Set → BA. The free Boolean algebra on κ generators can be realized concretely as an algebra of subsets of {0,1}^κ; for each α < κ, the αth generator is the set of all elements of {0,1}^κ whose αth coordinate is 1. In particular, the free Boolean algebra with ℵ0 generators is the collection of all clopen subsets of a Cantor space. For more on this approach to free Boolean algebra, see Stone's representation theorem for Boolean algebras. See also: Boolean algebra; generating set. Further reading: Steve Awodey, Category Theory; Paul Halmos and Steven Givant, Logic as Algebra.
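The 2^n atoms and 2^(2^n) elements can be modeled concretely in a few lines. The sketch below (our own illustration; `atoms`, `generators` and `elements` are names we introduce) represents each element as the set of atoms in its disjunction, with the i-th generator being the set of assignments whose i-th coordinate is 1, as in the construction described in the text:

```python
from itertools import combinations, product

# Atoms of the free Boolean algebra on n generators: the 2**n truth
# assignments to the generating propositions.
n = 2
atoms = list(product([0, 1], repeat=n))

# The i-th generator is the set of assignments whose i-th coordinate is 1.
generators = [frozenset(a for a in atoms if a[i] == 1) for i in range(n)]

# Every element is a disjunction of atoms, i.e. a subset of the atom set;
# FALSE is the empty subset. That gives 2**(2**n) elements in total.
elements = [frozenset(s)
            for r in range(len(atoms) + 1)
            for s in combinations(atoms, r)]
print(len(elements))  # 16 for n = 2
```

Under this encoding, join is set union, meet is set intersection, and complement is the set difference from the full atom set.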

4.
Binary decision diagram
–
In computer science, a binary decision diagram (BDD) or branching program is a data structure that is used to represent a Boolean function. On a more abstract level, BDDs can be considered as a compressed representation of sets or relations. Unlike other compressed representations, operations are performed directly on the compressed representation, i.e. without decompression. Other data structures used to represent Boolean functions include negation normal form and propositional directed acyclic graphs. A Boolean function can be represented as a rooted, directed, acyclic graph consisting of decision nodes and terminal nodes. There are two types of terminal nodes, called 0-terminal and 1-terminal. Each decision node N is labeled by a Boolean variable V_N and has two child nodes, called low child and high child; the edge from node N to its low child represents an assignment of V_N to 0. Such a BDD is called ordered if different variables appear in the same order on all paths from the root. A BDD is said to be reduced if the following two rules have been applied to its graph: merge any isomorphic subgraphs, and eliminate any node whose two children are isomorphic. In popular usage, the term BDD almost always refers to a Reduced Ordered Binary Decision Diagram (ROBDD). The advantage of an ROBDD is that it is canonical for a particular function and variable order; this property makes it useful in functional equivalence checking and other operations like functional technology mapping. A path from the root node to the 1-terminal represents a variable assignment for which the represented Boolean function is true; as the path descends to a low child from a node, that node's variable is assigned to 0. The left figure below shows a binary decision tree and a truth table, each representing the function f(x1, x2, x3). In the tree on the left, the value of the function can be determined for a given variable assignment by following a path down the graph to a terminal. In the figures below, dotted lines represent edges to a low child, while solid lines represent edges to a high child. Therefore, to find f(0, 1, 1), begin at x1, traverse down the dotted line to x2, then down two solid lines.
This leads to the terminal 1, which is the value of f(0, 1, 1). The binary decision tree of the left figure can be transformed into a binary decision diagram by maximally reducing it according to the two reduction rules. The resulting BDD is shown in the right figure. The basic idea from which the data structure was created is the Shannon expansion: a switching function is split into two sub-functions by assigning one variable. If such a sub-function is considered as a sub-tree, it can be represented by a binary decision tree. Binary decision diagrams were introduced by Lee, and further studied and made known by Akers. The key later extensions were a fixed variable ordering and the sharing of sub-graphs; applying these two concepts results in an efficient data structure and algorithms for the representation of sets and relations. By extending the sharing to several BDDs, i.e. letting one sub-graph be used by several BDDs, the data structure Shared Reduced Ordered Binary Decision Diagram is obtained; the notion of a BDD is now generally used to refer to that particular data structure.
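The Shannon expansion and the two reduction rules can be sketched in a few lines. This is a minimal illustration, not a real BDD library; `make_node` and `build` are names we introduce, and the example function is hypothetical. Nodes are hash-consed tuples, so isomorphic subgraphs are merged, and nodes whose two children coincide are eliminated:

```python
# Hash-consing table: maps (var, low, high) to one shared node,
# implementing the "merge isomorphic subgraphs" reduction rule.
unique = {}

def make_node(var, low, high):
    if low == high:              # elimination rule: test on var is redundant
        return low
    return unique.setdefault((var, low, high), (var, low, high))

def build(f, nvars, var=0, assignment=()):
    """Shannon-expand f over variables var .. nvars-1; 0 and 1 are terminals."""
    if var == nvars:
        return 1 if f(assignment) else 0
    low = build(f, nvars, var + 1, assignment + (False,))   # var := 0
    high = build(f, nvars, var + 1, assignment + (True,))   # var := 1
    return make_node(var, low, high)

# Hypothetical example: f(x1, x2, x3) = (x1 AND x3) OR (NOT x1 AND NOT x3).
# f does not depend on x2, so the reduced diagram contains no x2 node.
f = lambda a: (a[0] and a[2]) or (not a[0] and not a[2])
root = build(f, 3)
print(root)  # (0, (2, 1, 0), (2, 0, 1))
```

The printed root shows the reduction at work: only variables 0 (x1) and 2 (x3) appear, because both branches of every x2 node were identical and were eliminated.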

5.
Boolean satisfiability problem
–
In computer science, the Boolean satisfiability problem (SAT) is the problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make it TRUE; in contrast, "a AND NOT a" is unsatisfiable. SAT is one of the first problems that was proven to be NP-complete. This means that all problems in the complexity class NP, which includes a wide range of natural decision and optimization problems, are at most as difficult to solve as SAT. SAT solvers are nevertheless used successfully in, e.g., artificial intelligence, circuit design, and automatic theorem proving. A propositional logic formula, also called a Boolean expression, is built from variables, the operators AND, OR, NOT, and parentheses. A formula is said to be satisfiable if it can be made TRUE by assigning appropriate logical values to its variables. The Boolean satisfiability problem is, given a formula, to determine whether it is satisfiable. This decision problem is of importance in various areas of computer science, including theoretical computer science, complexity theory, algorithmics, and cryptography. There are several special cases of the Boolean satisfiability problem in which the formulas are required to have a particular structure. A literal is either a variable, then called a positive literal, or the negation of a variable, then called a negative literal; a clause is a disjunction of literals. A clause is called a Horn clause if it contains at most one positive literal. A formula is in conjunctive normal form (CNF) if it is a conjunction of clauses.
The formula (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 is satisfiable: choosing x1 = FALSE, x2 = FALSE, and x3 arbitrarily, (FALSE ∨ ¬FALSE) ∧ (¬FALSE ∨ FALSE ∨ x3) ∧ ¬FALSE evaluates to (FALSE ∨ TRUE) ∧ (TRUE ∨ FALSE ∨ x3) ∧ TRUE, and in turn to TRUE ∧ TRUE ∧ TRUE. In contrast, the CNF formula a ∧ ¬a, consisting of two clauses of one literal each, is unsatisfiable, since for a = TRUE and a = FALSE, respectively, it evaluates to TRUE ∧ ¬TRUE and FALSE ∧ ¬FALSE, both of which are FALSE. Different sets of allowed Boolean operators lead to different problem versions. As an example, a generalized clause applies an operator R to three literals, and a conjunction of such generalized clauses is a generalized conjunctive normal form; this form is used below, with R being the operator that is TRUE just if exactly one of its arguments is. Using the laws of Boolean algebra, every propositional logic formula can be transformed into an equivalent conjunctive normal form, which may however be exponentially longer. For example, transforming the formula (x1 ∧ y1) ∨ (x2 ∧ y2) ∨ ⋯ ∨ (xn ∧ yn) into conjunctive normal form yields (x1 ∨ x2 ∨ ⋯ ∨ xn) ∧ (y1 ∨ x2 ∨ ⋯ ∨ xn) ∧ ⋯ ∧ (y1 ∨ y2 ∨ ⋯ ∨ yn); while the former is a disjunction of n conjunctions of 2 variables, the latter consists of 2^n clauses of n variables.
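Satisfiability of small CNF formulas can be checked by exhaustively trying all assignments. Below is a sketch of such a brute-force check (a helper of our own, not a real SAT solver; practical solvers use far better algorithms such as DPLL and clause learning). Clauses use DIMACS-style integer literals: k means variable k, and -k means its negation.

```python
from itertools import product

def satisfiable(clauses, nvars):
    """Brute-force SAT check: try every assignment of nvars variables."""
    for bits in product([False, True], repeat=nvars):
        assign = {i + 1: bits[i] for i in range(nvars)}
        # A clause is satisfied if some literal matches its assignment;
        # the formula is satisfied if every clause is.
        if all(any(assign[abs(l)] == (l > 0) for l in clause)
               for clause in clauses):
            return True
    return False

# "a AND NOT b": clauses [a], [NOT b]; satisfiable with a=TRUE, b=FALSE.
print(satisfiable([[1], [-2]], 2))   # True
# "a AND NOT a": clauses [a], [NOT a]; unsatisfiable.
print(satisfiable([[1], [-1]], 1))   # False
```

The running time is O(2^n) in the number of variables, which is exactly why SAT's complexity is interesting: no polynomial-time algorithm is known.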

6.
Bitwise operation
–
In digital computer programming, a bitwise operation operates on one or more bit patterns or binary numerals at the level of their individual bits. It is a fast, simple action directly supported by the processor. On simple low-cost processors, typically, bitwise operations are substantially faster than division, several times faster than multiplication, and sometimes significantly faster than addition. In the explanations below, any indication of a bit's position is counted from the right side. For example, the binary value 0001 has zeroes at every position but the first. The bitwise NOT, or complement, is a unary operation that performs logical negation on each bit, forming the ones' complement of the given binary value: bits that are 0 become 1, and those that are 1 become 0. For example:

NOT 0111 = 1000
NOT 10101011 = 01010100

The bitwise complement is equal to the two's complement of the value minus one. If two's complement arithmetic is used, then NOT x = −x − 1. For unsigned integers, the bitwise complement of a number is the mirror reflection of the number across the half-way point of the unsigned integer's range. A simple but illustrative use is to invert a grayscale image where each pixel is stored as an unsigned integer. A bitwise AND takes two equal-length binary representations and performs the logical AND operation on each pair of corresponding bits: if both bits in the compared position are 1, the bit in the resulting binary representation is 1; otherwise, the result is 0. For example:

0101 AND 0011 = 0001

The operation may be used to determine whether a particular bit is set or clear. This is often called bit masking. The bitwise AND may also be used to clear selected bits of a register in which each bit represents an individual Boolean state. This technique is an efficient way to store a number of Boolean values using as little memory as possible. For example, 0110 can be considered a set of four flags, where the first and fourth flags are clear, and the second and third flags are set.
Using the example above:

0110 AND 0001 = 0000

Because 6 AND 1 is zero, 6 is divisible by two and therefore even. A bitwise OR takes two bit patterns of equal length and performs the logical inclusive OR operation on each pair of corresponding bits. The result in each position is 0 if both bits are 0, while otherwise the result is 1. For example:

0101 OR 0011 = 0111

The bitwise OR may be used to set to 1 the selected bits of the register described above. A bitwise XOR takes two bit patterns of equal length and performs the logical exclusive OR operation on each pair of corresponding bits. The result in each position is 1 if only the first bit is 1 or only the second bit is 1; in this operation we perform the comparison of two bits, the result being 1 if the two bits are different, and 0 if they are the same. For example:

0101 XOR 0011 = 0110

The bitwise XOR may be used to invert selected bits in a register: any bit may be toggled by XORing it with 1. Assembly language programmers sometimes use XOR as a short-cut to setting the value of a register to zero, since performing XOR on a value against itself always yields zero, and on many architectures this operation requires fewer clock cycles and memory than loading a zero value. In bit-shift operations the digits are moved, or shifted, to the left or right. In an arithmetic shift, the bits that are shifted out of either end are discarded.
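The examples above can be tried directly. The following sketch uses Python, where integers are unbounded, so NOT and the left shift are masked to four bits to mimic a 4-bit register:

```python
x, y = 0b0101, 0b0011

# NOT must be masked: ~x alone gives a negative two's-complement result.
print(format(~x & 0b1111, '04b'))   # 1010  (4-bit complement of 0101)
print(format(x & y, '04b'))         # 0001  (0101 AND 0011)
print(format(x | y, '04b'))         # 0111  (0101 OR  0011)
print(format(x ^ y, '04b'))         # 0110  (0101 XOR 0011)

print(0b0110 & 1)                   # 0: low bit clear, so 6 is even
print(x ^ x)                        # 0: XOR of a value with itself

print(format((x << 1) & 0b1111, '04b'))  # 1010  (left shift by one)
print(format(x >> 1, '04b'))             # 0010  (right shift by one)
```

Note that `NOT x = −x − 1` also holds here: `~5` evaluates to `-6` in Python's unbounded two's-complement view.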

7.
Karnaugh map
–
The Karnaugh map, also known as the K-map, is a method to simplify Boolean algebra expressions. Maurice Karnaugh introduced it in 1953 as a refinement of Edward Veitch's 1952 Veitch diagram. The Karnaugh map reduces the need for extensive calculations by taking advantage of humans' pattern-recognition capability. It also permits the identification and elimination of potential race conditions. Optimal groups of 1s or 0s are identified, which represent the terms of a canonical form of the logic in the original truth table. These terms can be used to write a minimal Boolean expression representing the required logic. Karnaugh maps are used to simplify real-world logic requirements so that they can be implemented using a minimum number of physical logic gates. A sum-of-products expression can always be implemented using AND gates feeding into an OR gate. Karnaugh maps can also be used to simplify logic expressions in software design: Boolean conditions, as used for example in conditional statements, can get very complicated, and once minimised, canonical sum-of-products and product-of-sums expressions can be implemented directly using AND and OR logic operators. Karnaugh maps are used to facilitate the simplification of Boolean algebra functions. For example, consider the Boolean function described by the following truth table. Following are two different notations describing the same function in unsimplified Boolean algebra, using the Boolean variables A, B, C, D: F = ∑ m_i, where the m_i are the minterms to map, and F = ∏ M_i, where the M_i are the maxterms to map. In the example above, the four input variables can be combined in 16 different ways, so the truth table has 16 rows. The Karnaugh map is arranged in a 4 × 4 grid. The row and column indices are ordered in Gray code rather than binary numerical order; Gray code ensures that only one variable changes between each pair of adjacent cells.
Each cell of the completed Karnaugh map contains a binary digit representing the function's output for that combination of inputs. After the Karnaugh map has been constructed, it is used to find one of the simplest possible forms, a canonical form, for the information in the truth table. Adjacent 1s in the Karnaugh map represent opportunities to simplify the expression. The minterms for the final expression are found by encircling groups of 1s in the map. Minterm groups must be rectangular and must have an area that is a power of two; minterm rectangles should be as large as possible without containing any 0s. Groups may overlap in order to make each one larger.
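The Gray-code layout of the 4 × 4 grid can be sketched mechanically. In the example below, `f` is a hypothetical four-variable function of our own choosing (not the truth table referred to in the text); rows are indexed by AB and columns by CD, both in Gray-code order, so adjacent cells, including the wrap-around, differ in exactly one variable:

```python
# 2-bit Gray code ordering for rows (AB) and columns (CD).
gray = [0b00, 0b01, 0b11, 0b10]

def f(a, b, c, d):
    # Hypothetical example function, not the one from the text's truth table.
    return (a and not c) or (b and d)

# Build the 4x4 grid of outputs, one cell per input combination.
grid = []
for ab in gray:
    a, b = ab >> 1, ab & 1
    grid.append([int(f(a, b, cd >> 1, cd & 1)) for cd in gray])

for ab, row in zip(gray, grid):
    print(f"AB={ab:02b}: {row}")
```

Blocks of adjacent 1s in such a grid (e.g. the 2 × 2 square in the lower-left of this example, corresponding to the term A·C′) are exactly the rectangular minterm groups the text describes.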

8.
Boolean algebra (structure)
–
In abstract algebra, a Boolean algebra or Boolean lattice is a complemented distributive lattice. This type of algebraic structure captures essential properties of both set operations and logic operations. A Boolean algebra can be seen as a generalization of a power set algebra or a field of sets, and it is also a special case of a De Morgan algebra and a Kleene algebra. The term "Boolean algebra" honors George Boole, a self-educated English mathematician. Boole's formulation differs from that described above in some important respects; for example, conjunction and disjunction in Boole were not a dual pair of operations. Boolean algebra emerged in the 1860s, in papers written by William Jevons. The first systematic presentation of Boolean algebra and distributive lattices is owed to the 1890 Vorlesungen of Ernst Schröder. The first extensive treatment of Boolean algebra in English is A. N. Whitehead's 1898 Universal Algebra. Boolean algebra as an axiomatic algebraic structure in the modern axiomatic sense begins with a 1904 paper by Edward V. Huntington. Boolean algebra came of age as serious mathematics with the work of Marshall Stone in the 1930s, and with Garrett Birkhoff's 1940 Lattice Theory. In the 1960s, Paul Cohen, Dana Scott, and others found deep new results in mathematical logic and axiomatic set theory using offshoots of Boolean algebra, namely forcing and Boolean-valued models. A Boolean algebra with only one element is called a trivial Boolean algebra or a degenerate Boolean algebra. It follows from the last three pairs of axioms above, or from the absorption axiom, that a = b ∧ a if and only if a ∨ b = b. The relation ≤ defined by a ≤ b if these equivalent conditions hold is a partial order with least element 0. The meet a ∧ b and the join a ∨ b of two elements coincide with their infimum and supremum, respectively, with respect to ≤. The first four pairs of axioms constitute a definition of a bounded lattice.
It follows from the first five pairs of axioms that any complement is unique. The set of axioms is self-dual in the sense that if one exchanges ∨ with ∧ and 0 with 1 in an axiom, the result is again an axiom. Therefore, by applying this operation to a Boolean algebra, one obtains another Boolean algebra with the same elements. In the two-element Boolean algebra used in digital logic, every possible input-output behavior can be modeled by a suitable Boolean expression. The power set of any given set S forms a Boolean algebra: the smallest element 0 is the empty set and the largest element 1 is the set S itself. Starting with the propositional calculus with κ sentence symbols, form the Lindenbaum algebra, the set of sentences modulo logical equivalence. This construction yields a Boolean algebra, and it is in fact the free Boolean algebra on κ generators. A truth assignment in propositional calculus is then a Boolean algebra homomorphism from this algebra to the two-element Boolean algebra. Interval algebras are useful in the study of Lindenbaum-Tarski algebras; every countable Boolean algebra is isomorphic to an interval algebra. For any natural number n, the set of all positive divisors of n, defining a ≤ b if a divides b, forms a distributive lattice; this lattice is a Boolean algebra if and only if n is square-free.
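The divisor-lattice example can be checked computationally. The sketch below (helper names `divisors`, `is_squarefree` and `is_complemented` are our own) takes meet = gcd and join = lcm on the divisors of n, and tests whether every element has a complement, which is what distinguishes the square-free case:

```python
from math import gcd

def divisors(n):
    """All positive divisors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

def is_squarefree(n):
    """True if no prime square divides n."""
    return all(n % (p * p) != 0 for p in range(2, n + 1))

def is_complemented(n):
    """Every divisor a needs a b with gcd(a, b) = 1 and lcm(a, b) = n."""
    ds = divisors(n)
    return all(any(gcd(a, b) == 1 and a * b // gcd(a, b) == n for b in ds)
               for a in ds)

for n in (30, 12):
    print(n, is_squarefree(n), is_complemented(n))
# 30 = 2*3*5 is square-free and the lattice is complemented (a Boolean
# algebra, with complement n // a); 12 = 2^2 * 3 fails both.
```

For n = 12, for instance, the divisor 2 has no complement: no divisor b satisfies both gcd(2, b) = 1 and lcm(2, b) = 12, so the lattice is distributive but not Boolean.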