1.
Boolean algebra (structure)
–
In abstract algebra, a Boolean algebra or Boolean lattice is a complemented distributive lattice. This type of algebraic structure captures essential properties of both set operations and logic operations. A Boolean algebra can be seen as a generalization of a power set algebra or a field of sets; it is also a special case of a De Morgan algebra and a Kleene algebra. The term Boolean algebra honors George Boole, a self-educated English mathematician, although Boole's own formulation differs from that described above in some important respects. For example, conjunction and disjunction in Boole were not a dual pair of operations. Boolean algebra emerged in the 1860s, in papers written by William Jevons; the first systematic presentation of Boolean algebra and distributive lattices is owed to the 1890 Vorlesungen of Ernst Schröder. The first extensive treatment of Boolean algebra in English is A. N. Whitehead's 1898 Universal Algebra, and Boolean algebra as an axiomatic algebraic structure in the modern axiomatic sense begins with a 1904 paper by Edward V. Huntington. Boolean algebra came of age as serious mathematics with the work of Marshall Stone in the 1930s, and with Garrett Birkhoff's 1940 Lattice Theory. In the 1960s, Paul Cohen, Dana Scott, and others found deep new results in mathematical logic and axiomatic set theory using offshoots of Boolean algebra, namely forcing and Boolean-valued models. A Boolean algebra with only one element is called a trivial Boolean algebra or a degenerate Boolean algebra. It follows from the last three pairs of axioms above, or from the absorption axiom, that a = b ∧ a if and only if b = a ∨ b. The relation ≤, defined by a ≤ b if these equivalent conditions hold, is a partial order with least element 0 and greatest element 1. The meet a ∧ b and the join a ∨ b of two elements coincide with their infimum and supremum, respectively, with respect to ≤. The first four pairs of axioms constitute a definition of a bounded lattice.
It follows from the first five pairs of axioms that any complement is unique. The set of axioms is self-dual in the sense that if one exchanges ∨ with ∧ and 0 with 1 in an axiom, the result is again an axiom; therefore, by applying this exchange to a Boolean algebra, one obtains another Boolean algebra with the same elements. Furthermore, every possible input-output behavior can be modeled by a suitable Boolean expression. In the Boolean algebra of subsets of a set S, the smallest element 0 is the empty set and the largest element 1 is the set S itself. Starting with the propositional calculus with κ sentence symbols, form the Lindenbaum algebra. This construction yields a Boolean algebra, and it is in fact the free Boolean algebra on κ generators. A truth assignment in propositional calculus is then a Boolean algebra homomorphism from this algebra to the two-element Boolean algebra. Interval algebras are useful in the study of Lindenbaum-Tarski algebras; every countable Boolean algebra is isomorphic to an interval algebra. For any natural number n, the set of all positive divisors of n, defining a ≤ b if a divides b, forms a distributive lattice; this lattice is a Boolean algebra if and only if n is square-free.
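The divisor example can be made concrete. For a square-free number such as n = 30, meet is gcd, join is lcm, and the complement of a divisor a is n/a; the following sketch (variable names are my own, and the square-free assumption is what makes the lattice complemented) verifies the complement laws:

```python
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def lcm(a, b):
    return a * b // gcd(a, b)

n = 30  # square-free: 2 * 3 * 5
for a in divisors(n):
    comp = n // a                 # complement of a is n/a when n is square-free
    assert gcd(a, comp) == 1      # a ∧ ¬a = 0  (meet is gcd, bottom element is 1)
    assert lcm(a, comp) == n      # a ∨ ¬a = 1  (join is lcm, top element is n)
```

For a non-square-free n such as 12, the divisor 2 has no such complement, which is why the square-free condition matters.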

2.
Propositional logic
–
Logical connectives are found in natural languages; in English, for example, some examples are "and", "or", and "not". The following is an example of a very simple inference within the scope of propositional logic. Premise 1: If it's raining then it's cloudy. Premise 2: It's raining. Conclusion: It's cloudy. Both premises and the conclusion are propositions. The premises are taken for granted, and then with the application of modus ponens the conclusion follows. Not only that, but the premises and conclusion will also correspond with any other inference of this form. Propositional logic may be studied through a formal system in which formulas of a formal language may be interpreted to represent propositions. A system of rules and axioms allows certain formulas to be derived. These derived formulas are called theorems and may be interpreted to be true propositions. A constructed sequence of such formulas is known as a derivation or proof, and the last formula of the sequence is the theorem. The derivation may be interpreted as proof of the proposition represented by the theorem. When a formal system is used to represent formal logic, only statement letters are represented directly. Usually, in truth-functional propositional logic, formulas are interpreted as having either a truth value of true or a truth value of false. Truth-functional propositional logic, and systems isomorphic to it, are considered to be zeroth-order logic. Although propositional logic had been hinted at by earlier philosophers, it was developed into a formal logic by Chrysippus in the 3rd century BC and expanded by his successor Stoics. Their logic was focused on propositions; this advancement was different from the traditional syllogistic logic, which was focused on terms. However, later in antiquity, the propositional logic developed by the Stoics was no longer understood. Consequently, the system was essentially reinvented by Peter Abelard in the 12th century.
Propositional logic was eventually refined using symbolic logic. The 17th/18th-century mathematician Gottfried Leibniz has been credited with being the founder of symbolic logic for his work with the calculus ratiocinator. Although his work was the first of its kind, it was unknown to the larger logical community; consequently, many of the advances achieved by Leibniz were recreated by logicians like George Boole and Augustus De Morgan, completely independently of Leibniz. Just as propositional logic can be considered an advancement from the earlier syllogistic logic, one author describes predicate logic as combining the distinctive features of syllogistic logic and propositional logic. Consequently, predicate logic ushered in a new era in logic's history; however, advances in propositional logic were still made after Frege, including natural deduction. Natural deduction was invented by Gerhard Gentzen and Jan Łukasiewicz, and truth trees were invented by Evert Willem Beth. The invention of truth tables, however, is of controversial attribution: within works by Frege and Bertrand Russell are ideas influential to the invention of truth tables, but the actual tabular structure itself is credited to either Ludwig Wittgenstein or Emil Post.
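The modus ponens inference mentioned above can be checked semantically: an inference form is valid when no truth assignment makes all the premises true and the conclusion false. A small sketch (helper names are my own, not a standard API):

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

def is_valid(premises, conclusion, n_vars):
    """True iff every assignment satisfying all premises also satisfies the conclusion."""
    return all(
        implies(all(prem(*v) for prem in premises), conclusion(*v))
        for v in product([False, True], repeat=n_vars)
    )

# Modus ponens: from P -> Q and P, infer Q.
valid = is_valid([lambda p, q: implies(p, q), lambda p, q: p],
                 lambda p, q: q, 2)
print(valid)  # True
```

By contrast, the fallacy of affirming the consequent (from P → Q and Q, infer P) fails this check, since the assignment P = false, Q = true is a counterexample.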

3.
Boolean algebra
–
In mathematics and mathematical logic, Boolean algebra is the branch of algebra in which the values of the variables are the truth values true and false, usually denoted 1 and 0 respectively. It is thus a formalism for describing logical relations in the same way that ordinary algebra describes numeric relations. Boolean algebra was introduced by George Boole in his first book, The Mathematical Analysis of Logic; according to Huntington, the term Boolean algebra was first suggested by Sheffer in 1913. Boolean algebra has been fundamental in the development of digital electronics, and it is also used in set theory and statistics. Boole's algebra predated the modern developments in abstract algebra and mathematical logic. In an abstract setting, Boolean algebra was perfected in the late 19th century by Jevons, Schröder, Huntington, and others; in fact, M. H. Stone proved in 1936 that every Boolean algebra is isomorphic to a field of sets. Shannon already had at his disposal this abstract mathematical apparatus, and thus he cast his switching algebra as the two-element Boolean algebra. In circuit engineering settings today there is little need to consider other Boolean algebras, so "switching algebra" and "Boolean algebra" are often used interchangeably. Efficient implementation of Boolean functions is a fundamental problem in the design of combinational logic circuits. Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean algebra; thus, Boolean logic is sometimes used to denote propositional calculus performed in this way. Boolean algebra is not sufficient to capture logic formulas using quantifiers. The closely related model of computation known as a Boolean circuit relates time complexity to circuit complexity. Whereas in elementary algebra expressions denote mainly numbers, in Boolean algebra they denote the truth values false and true; these values are represented with the bits 0 and 1.
Addition and multiplication then play the Boolean roles of XOR and AND, respectively. Boolean algebra also deals with functions which have their values in the set {0, 1}; a sequence of bits is a commonly used such function. Another common example is the subsets of a set E: to a subset F of E is associated the indicator function that takes the value 1 on F and 0 outside F. The most general example is the elements of a Boolean algebra. As with elementary algebra, the purely equational part of the theory may be developed without considering explicit values for the variables. The basic operations of Boolean calculus are as follows. AND, denoted x∧y, satisfies x∧y = 1 if x = y = 1 and x∧y = 0 otherwise. OR, denoted x∨y, satisfies x∨y = 0 if x = y = 0 and x∨y = 1 otherwise. NOT, denoted ¬x, satisfies ¬x = 0 if x = 1 and ¬x = 1 if x = 0. Alternatively the values of x∧y, x∨y, and ¬x can be expressed by tabulating their values with truth tables. A derived operation, x → y, or Cxy, is called material implication: if x is true, then the value of x → y is taken to be that of y; if x is false, x → y is true.
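The basic operations and material implication can be tabulated directly. A small sketch (function names are my own) over bit values 0 and 1:

```python
def AND(x, y):     return x & y
def OR(x, y):      return x | y
def NOT(x):        return 1 - x
def IMPLIES(x, y): return OR(NOT(x), y)   # if x is true, x -> y takes the value of y

# Print the full truth table for two variables.
print("x y | x∧y x∨y ¬x x→y")
for x in (0, 1):
    for y in (0, 1):
        print(x, y, "|", AND(x, y), " ", OR(x, y), " ", NOT(x), " ", IMPLIES(x, y))
```

Note how IMPLIES is defined from OR and NOT, illustrating that material implication is a derived rather than basic operation.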

4.
Bent function
–
In the mathematical field of combinatorics, a bent function is a special type of Boolean function. This means it takes several inputs, each of which has two possible values, and gives one output. Bent functions are so called because they are as different as possible from all linear functions and from all affine functions; this makes the bent functions naturally hard to approximate. Bent functions were defined and named in the 1960s by Oscar Rothaus in research not published until 1976. They have been studied for their applications in cryptography, but have also been applied to spread spectrum and coding theory. The definition can be extended in several ways, leading to different classes of generalized bent functions that share many of the properties of the original. It is known that V. A. Eliseev and O. P. Stepchenkov studied bent functions; however, their results have still not been declassified. Bent functions are defined in terms of the Walsh transform. The Walsh transform of a Boolean function f: Z₂ⁿ → Z₂ is the function f̂(a) = Σ_{x∈Z₂ⁿ} (−1)^{f(x) + a·x}, where a·x = a₁x₁ + a₂x₂ + … + aₙxₙ is the dot product in Z₂ⁿ. Alternatively, let S₀(a) = {x ∈ Z₂ⁿ : f(x) = a·x} and S₁(a) = {x ∈ Z₂ⁿ : f(x) ≠ a·x}; then |S₀| + |S₁| = 2ⁿ and hence f̂(a) = |S₀| − |S₁| = 2|S₀| − 2ⁿ. For any Boolean function f and a ∈ Z₂ⁿ the transform lies in the range −2ⁿ ≤ f̂(a) ≤ 2ⁿ. Moreover, the linear function f₀(x) = a·x and the affine function f₁(x) = a·x + 1 correspond to the two extreme cases, since f̂₀(a) = 2ⁿ and f̂₁(a) = −2ⁿ. Thus, for each a ∈ Z₂ⁿ the value of f̂(a) characterizes where the function f lies in the range from f₀ to f₁. Rothaus defined a bent function as a Boolean function f: Z₂ⁿ → Z₂ whose Walsh transform has constant absolute value. Bent functions are in a sense equidistant from all the affine functions. The simplest examples of bent functions, written in algebraic normal form, are F = x1x2 and G = x1x2 + x3x4. This pattern continues: x1x2 + x3x4 + … + x(n−1)xn is a bent function Z₂ⁿ → Z₂ for every even n, but there is a wide variety of different types of bent functions as n increases.
The sequence of values (−1)^{f(x)}, with x ∈ Z₂ⁿ taken in order, is called a bent sequence; bent functions and bent sequences carry the same information. In this ±1 form, the Walsh transform is easily computed as f̂ = W f, where W is the natural-ordered Walsh matrix and the sequence is treated as a column vector. Rothaus proved that bent functions exist only for even n, and that a bent function satisfies |f̂(a)| = 2^{n/2} for every a. In fact, f̂(a) = 2^{n/2}(−1)^{g(a)} for some Boolean function g; in this case, ĝ(a) = 2^{n/2}(−1)^{f(a)}, so f and g are considered dual functions. Every bent function has a Hamming weight of 2^{n−1} ± 2^{n/2−1}, so the nonlinearity of f is 2^{n−1} − 2^{n/2−1}, the maximum possible.
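As a quick numeric check (an illustrative sketch, not from the source), the Walsh spectrum of the example G = x1x2 + x3x4 can be computed by brute force; a bent function on n = 4 variables must have |f̂(a)| = 2^{n/2} = 4 for every a:

```python
from itertools import product

def walsh(f, n):
    """Walsh transform: f̂(a) = sum over x of (-1)^(f(x) + a·x), arithmetic in Z_2."""
    pts = list(product([0, 1], repeat=n))
    return [sum((-1) ** (f(x) ^ (sum(ai * xi for ai, xi in zip(a, x)) % 2))
                for x in pts)
            for a in pts]

# G(x) = x1 x2 + x3 x4, the standard bent example for n = 4
g = lambda x: (x[0] & x[1]) ^ (x[2] & x[3])
spectrum = walsh(g, 4)
print(set(abs(v) for v in spectrum))  # {4}: constant absolute value 2^(n/2)
```

The same loop applied to an affine function would instead produce a spectrum with a single value of absolute value 2ⁿ and zeros elsewhere.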

5.
Binary decision diagram
–
In computer science, a binary decision diagram (BDD) or branching program is a data structure that is used to represent a Boolean function. On a more abstract level, BDDs can be considered as a compressed representation of sets or relations. Unlike other compressed representations, operations are performed directly on the compressed representation. Other data structures used to represent a Boolean function include negation normal form and propositional directed acyclic graphs. A Boolean function can be represented as a rooted, directed, acyclic graph; there are two types of terminal nodes, called 0-terminal and 1-terminal. Each decision node N is labeled by a Boolean variable V_N and has two child nodes, called low child and high child; the edge from node N to its low child represents an assignment of V_N to 0. Such a BDD is called ordered if different variables appear in the same order on all paths from the root. A BDD is said to be reduced if the following two rules have been applied to its graph: merge any isomorphic subgraphs, and eliminate any node whose two children are isomorphic. In popular usage, the term BDD almost always refers to a Reduced Ordered Binary Decision Diagram (ROBDD). The advantage of an ROBDD is that it is canonical for a particular function, and this property makes it useful in functional equivalence checking and other operations like functional technology mapping. A path from the root node to the 1-terminal represents a variable assignment for which the represented Boolean function is true; as the path descends to a low child from a node, that node's variable is assigned 0, and descending to a high child assigns it 1. The left figure below shows a binary decision tree and a truth table, each representing the function f. In the tree on the left, the value of the function can be determined for a given variable assignment by following a path down the graph to a terminal. In the figures below, dotted lines represent edges to a low child. Therefore, to evaluate f, begin at x1, traverse down the dotted line to x2, then down two solid lines.
This leads to the terminal 1, which is the value of f. The binary decision tree of the left figure can be transformed into a binary decision diagram by maximally reducing it according to the two reduction rules; the resulting BDD is shown in the right figure. The basic idea from which the data structure was created is the Shannon expansion: a switching function is split into two sub-functions by assigning one variable. If such a sub-function is considered as a sub-tree, it can be represented by a binary decision tree. Binary decision diagrams were introduced by Lee, and further studied and made known by Akers. Applying these two concepts results in an efficient data structure and efficient algorithms for the representation of sets and relations. By extending the sharing to several BDDs, i.e. letting one sub-graph be used by several BDDs, one obtains a shared reduced ordered binary decision diagram; the notion of a BDD is now generally used to refer to that particular data structure.
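The Shannon expansion and the two reduction rules can be sketched in a few lines. This is an illustrative toy (the class and its structure are my own, not an established library API) that uses hash-consing, so that rule 1, merging isomorphic subgraphs, happens automatically as nodes are created bottom-up:

```python
class BDD:
    """Toy ROBDD builder: nodes are hash-consed (var, low, high) triples,
    terminals are the integers 0 and 1, variable order is fixed as x0 < x1 < ..."""
    def __init__(self):
        self.table = {}  # unique table: merges isomorphic subgraphs (rule 1)

    def node(self, var, low, high):
        if low == high:                # rule 2: eliminate a redundant test
            return low
        return self.table.setdefault((var, low, high), (var, low, high))

    def build(self, f, n, prefix=()):
        # Shannon expansion: f = ¬x·f[x:=0] ∨ x·f[x:=1]
        if len(prefix) == n:
            return int(f(prefix))
        low = self.build(f, n, prefix + (0,))
        high = self.build(f, n, prefix + (1,))
        return self.node(len(prefix), low, high)

bdd = BDD()
root = bdd.build(lambda x: x[0] ^ x[1] ^ x[2], 3)   # 3-variable XOR
print(len(bdd.table))  # 5 decision nodes, versus 7 in the full decision tree
```

Even on this tiny function the sharing is visible; for functions with more structure the gap between the decision tree and the reduced diagram grows dramatically.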

6.
Bitwise operation
–
In digital computer programming, a bitwise operation operates on one or more bit patterns or binary numerals at the level of their individual bits. It is a fast, simple action directly supported by the processor. On simple low-cost processors, typically, bitwise operations are substantially faster than division, several times faster than multiplication, and sometimes significantly faster than addition. In the explanations below, any indication of a bit's position is counted from the right side. For example, the binary value 0001 has zeroes at every position but the first. The bitwise NOT, or complement, is a unary operation that performs logical negation on each bit, forming the ones' complement of the given binary value: bits that are 0 become 1, and those that are 1 become 0. For example: NOT 0111 = 1000, and NOT 10101011 = 01010100. The bitwise complement is equal to the two's complement of the value minus one. If two's complement arithmetic is used, then NOT x = −x − 1. For unsigned integers, the bitwise complement of a number is the mirror reflection of the number across the half-way point of the unsigned integer's range. A simple but illustrative use is to invert a grayscale image where each pixel is stored as an unsigned integer. A bitwise AND takes two equal-length binary representations and performs the logical AND operation on each pair of the corresponding bits: if both bits in the compared position are 1, the bit in the resulting binary representation is 1; otherwise, the result is 0. For example: 0101 AND 0011 = 0001. The operation may be used to determine whether a particular bit is set or clear; this is often called bit masking. The bitwise AND may also be used to clear selected bits of a register in which each bit represents an individual Boolean state. This technique is an efficient way to store a number of Boolean values using as little memory as possible. For example, 0110 can be considered a set of four flags, where the first and fourth flags are clear, and the second and third flags are set.
Using the example above, 0110 AND 0001 = 0000; because 6 AND 1 is zero, 6 is divisible by two and therefore even. A bitwise OR takes two bit patterns of equal length and performs the logical inclusive OR operation on each pair of corresponding bits. The result in each position is 0 if both bits are 0, while otherwise the result is 1. For example: 0101 OR 0011 = 0111. The bitwise OR may be used to set to 1 the selected bits of the register described above. A bitwise XOR takes two bit patterns of equal length and performs the logical exclusive OR operation on each pair of corresponding bits. The result in each position is 1 if only the first bit is 1 or only the second bit is 1. That is, XOR compares two bits, giving 1 if the two bits are different and 0 if they are the same. For example: 0101 XOR 0011 = 0110. The bitwise XOR may be used to invert selected bits in a register; any bit may be toggled by XORing it with 1. Assembly language programmers sometimes use XOR as a short-cut to setting the value of a register to zero: performing XOR on a value against itself always yields zero, and on many architectures this operation requires fewer clock cycles and less memory than loading a zero value. In the bit shift operations, the digits are moved, or shifted, to the left or right. In an arithmetic shift, the bits that are shifted out of either end are discarded.
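The examples above can be reproduced directly in any language with bitwise operators; a Python sketch (keeping NOT within 8 bits by masking, since Python integers are unbounded):

```python
x = 0b10101011
print(format(~x & 0xFF, '08b'))        # 01010100: bitwise NOT within 8 bits
print(format(0b0101 & 0b0011, '04b'))  # 0001: AND
print(format(0b0101 | 0b0011, '04b'))  # 0111: OR
print(format(0b0101 ^ 0b0011, '04b'))  # 0110: XOR

assert (6 & 1) == 0        # masking the lowest bit: 6 is even
assert ~x == -x - 1        # two's complement identity: NOT x = -x - 1
assert (x ^ x) == 0        # XOR of a value with itself is zero
```

The explicit `& 0xFF` mask stands in for the fixed register width that a processor would impose implicitly.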

7.
George Boole
–
George Boole was an English mathematician, educator, philosopher and logician. He worked in the fields of differential equations and algebraic logic, and Boolean logic is credited with laying the foundations for the information age. Boole was born in Lincoln, Lincolnshire, England, the son of John Boole Sr. He had a primary school education and received lessons from his father, but had little further formal and academic teaching. William Brooke, a bookseller in Lincoln, may have helped him with Latin, and he was self-taught in modern languages. At age 16 Boole became the breadwinner for his parents and three siblings, taking up a junior teaching position in Doncaster at Heigham's School. Boole participated in the Mechanics' Institute, in the Greyfriars, Lincoln. Without a teacher, it took him many years to master calculus. At age 19, Boole successfully established his own school in Lincoln; four years later he took over Hall's Academy in Waddington, outside Lincoln, following the death of Robert Hall. In 1840 he moved back to Lincoln, where he ran a boarding school. Boole became a prominent local figure and an admirer of John Kaye, the bishop. He took part in the campaign for early closing. With E. R. Larken and others he set up a society in 1847. He associated also with the Chartist Thomas Cooper, whose wife was a relation. From 1838 onwards Boole was making contacts with sympathetic British academic mathematicians and reading more widely. He studied algebra in the form of symbolic methods, as far as these were understood at the time. Boole's status as a mathematician was recognised by his appointment in 1849 as the first professor of mathematics at Queen's College, Cork in Ireland. He met his future wife, Mary Everest, there in 1850 while she was visiting her uncle John Ryall, who was Professor of Greek. They married some years later, in 1855. He maintained his ties with Lincoln, working there with E. R. Larken in a campaign to reduce prostitution.
Boole was awarded the Keith Medal by the Royal Society of Edinburgh in 1855 and was elected a Fellow of the Royal Society in 1857; he received honorary degrees of LL.D. from the University of Dublin and the University of Oxford. In late November 1864, Boole walked, in the rain, from his home at Lichfield Cottage in Ballintemple to the university. He soon became ill, developing a cold and high fever. As his wife believed that remedies should resemble their cause, she put her husband to bed and poured buckets of water over him, the wet having brought on his illness. Boole's condition worsened, and on 8 December 1864 he died of fever-induced pleural effusion.

8.
Boolean ring
–
In mathematics, a Boolean ring R is a ring for which x² = x for all x in R, such as the ring of integers modulo 2; that is, R consists only of idempotent elements. Every Boolean ring gives rise to a Boolean algebra, with ring multiplication corresponding to conjunction or meet ∧, and ring addition to exclusive disjunction or symmetric difference. Boolean rings are named after the founder of Boolean algebra, George Boole. There are at least four different and incompatible systems of notation for Boolean rings and algebras. In commutative algebra the standard notation is to use x + y = (x ∧ ¬y) ∨ (¬x ∧ y) for the ring sum of x and y, and use xy = x ∧ y for their product. In logic, a common notation is to use x ∧ y for the meet and x ∨ y for the join. In set theory and logic it is common to use x · y for the meet and x + y for the join x ∨ y; this use of + is different from the use in ring theory. A rare convention is to use xy for the product and x ⊕ y for the ring sum, in an effort to avoid the ambiguity of +. Historically, the term Boolean ring has been used to mean a Boolean ring possibly without an identity. One example of a Boolean ring is the power set of any set X, where the addition in the ring is symmetric difference and the multiplication is intersection. As another example, we can consider the set of all finite or cofinite subsets of X, again with symmetric difference and intersection. More generally, with these operations any field of sets is a Boolean ring, and by Stone's representation theorem every Boolean ring is isomorphic to a field of sets. Since the join operation ∨ in a Boolean algebra is often written additively, it makes sense in this context to denote ring addition by ⊕. Given a Boolean ring R, for x and y in R we can define x ∧ y = xy and x ∨ y = x ⊕ y ⊕ xy, and these operations then satisfy all of the axioms for meets, joins, and complements in a Boolean algebra. Thus every Boolean ring becomes a Boolean algebra. Similarly, every Boolean algebra becomes a Boolean ring: xy = x ∧ y, and x ⊕ y = (x ∨ y) ∧ ¬(x ∧ y).
If a Boolean ring is translated into a Boolean algebra in this way, and then the Boolean algebra is translated back into a ring, the result is the original ring; the analogous result holds beginning with a Boolean algebra. A map between two Boolean rings is a ring homomorphism if and only if it is a homomorphism of the corresponding Boolean algebras. Furthermore, a subset of a Boolean ring is a ring ideal if and only if it is an order ideal of the corresponding Boolean algebra. The quotient ring of a Boolean ring modulo a ring ideal corresponds to the factor algebra of the corresponding Boolean algebra modulo the corresponding order ideal. The property x ⊕ x = 0 shows that any Boolean ring is an algebra over the field F2 with two elements, in just one way.
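The power-set example can be checked mechanically. The following sketch (names are my own) realizes the Boolean ring on subsets of a three-element set, with ring addition as symmetric difference and multiplication as intersection, and recovers the lattice join via x ∨ y = x ⊕ y ⊕ xy:

```python
from itertools import combinations

X = {1, 2, 3}
subsets = [frozenset(c) for r in range(4) for c in combinations(X, r)]

add = lambda a, b: a ^ b   # ring addition: symmetric difference
mul = lambda a, b: a & b   # ring multiplication: intersection

for a in subsets:
    assert mul(a, a) == a              # idempotence: x^2 = x
    assert add(a, a) == frozenset()    # characteristic 2: x ⊕ x = 0
for a in subsets:
    for b in subsets:
        # the Boolean algebra join recovered from the ring operations
        assert add(add(a, b), mul(a, b)) == a | b
```

The `x ⊕ x = 0` assertion is exactly the property showing the ring is an F2-algebra.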

9.
Boolean satisfiability problem
–
In computer science, the Boolean satisfiability problem (SAT) is the problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable; on the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make it TRUE; in contrast, "a AND NOT a" is unsatisfiable. SAT is one of the first problems that was proven to be NP-complete. This means that all problems in the complexity class NP, which includes a wide range of natural decision and optimization problems, are at most as difficult to solve as SAT. Despite this, SAT solvers are used successfully in many application domains, e.g. artificial intelligence, circuit design, and automatic theorem proving. A propositional logic formula, also called a Boolean expression, is built from variables, the operators AND, OR, NOT, and parentheses. A formula is said to be satisfiable if it can be made TRUE by assigning appropriate logical values to its variables. The Boolean satisfiability problem is, given a formula, to decide whether it is satisfiable. This decision problem is of central importance in various areas of computer science, including theoretical computer science, complexity theory, algorithmics, and cryptography. There are several special cases of the Boolean satisfiability problem in which the formulas are required to have a particular structure. A literal is either a variable (then called a positive literal) or the negation of a variable (a negative literal); a clause is a disjunction of literals. A clause is called a Horn clause if it contains at most one positive literal. A formula is in conjunctive normal form (CNF) if it is a conjunction of clauses.
For example, the CNF formula (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 is satisfiable: choosing x1 = FALSE, x2 = FALSE, and x3 arbitrarily, it evaluates to (FALSE ∨ ¬FALSE) ∧ (¬FALSE ∨ FALSE ∨ x3) ∧ ¬FALSE, and in turn to TRUE ∧ TRUE ∧ TRUE. In contrast, the CNF formula a ∧ ¬a, consisting of two clauses of one literal, is unsatisfiable, since for a = TRUE and a = FALSE it evaluates to TRUE ∧ ¬TRUE and FALSE ∧ ¬FALSE, respectively, both of which are FALSE. Different sets of allowed Boolean operators lead to different problem versions. As an example, with R being an operator that is TRUE just if exactly one of its arguments is, a formula built from R-clauses is in generalized conjunctive normal form. Using the laws of Boolean algebra, every propositional logic formula can be transformed into an equivalent conjunctive normal form, which may, however, be exponentially longer. For example, transforming the formula (x1∧y1) ∨ (x2∧y2) ∨ … ∨ (xn∧yn) into conjunctive normal form yields a conjunction of clauses obtained by picking one variable from each conjunct; while the former is a disjunction of n conjunctions of 2 variables, the latter consists of 2ⁿ clauses of n variables.
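A brute-force satisfiability check makes the definition concrete. The following sketch (a toy enumeration, not how production SAT solvers work) encodes a clause as a list of integers, +i for xᵢ and −i for ¬xᵢ, and tries every assignment:

```python
from itertools import product

def brute_force_sat(clauses, n):
    """Return a satisfying assignment (tuple of bools) or None if unsatisfiable."""
    for bits in product([False, True], repeat=n):
        # a literal l is satisfied when the variable's value matches its sign
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            return bits
    return None

# (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1  is satisfiable
print(brute_force_sat([[1, -2], [-1, 2, 3], [-1]], 3))
# a ∧ ¬a  is unsatisfiable
print(brute_force_sat([[1], [-1]], 1))  # None
```

The enumeration takes 2ⁿ steps in the worst case, which is exactly the behavior NP-completeness leads one to expect for a naive algorithm; modern solvers rely on pruning and clause learning instead.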

10.
De Morgan's laws
–
In propositional logic and Boolean algebra, De Morgan's laws are a pair of transformation rules that are both valid rules of inference. They are named after Augustus De Morgan, a 19th-century British mathematician. The rules allow the expression of conjunctions and disjunctions purely in terms of each other via negation. Applications of the rules include the simplification of logical expressions in computer programs. De Morgan's laws are an example of the more general concept of mathematical duality. The negation of conjunction rule may be written in sequent notation as ¬(P ∧ Q) ⊢ (¬P ∨ ¬Q), and the negation of disjunction rule as ¬(P ∨ Q) ⊢ (¬P ∧ ¬Q). De Morgan's laws are shown in the compact form above, with the negation of the output on the left. A clearer form for substitution can be stated as (P ∧ Q) ≡ ¬(¬P ∨ ¬Q) and (P ∨ Q) ≡ ¬(¬P ∧ ¬Q); this emphasizes the need to invert both the inputs and the output, as well as change the operator, when doing a substitution. In set notation, De Morgan's laws can be remembered using the mnemonic "break the line, change the sign". De Morgan's laws commonly apply to text searching using the Boolean operators AND, OR, and NOT. Consider a set of documents containing the words "cars" and "trucks": Document 3 contains both "cars" and "trucks", and Document 4 contains neither "cars" nor "trucks". To evaluate Search A, NOT (cars OR trucks), note that the search "cars OR trucks" will clearly hit on Documents 1, 2, and 3, so the negation of that search will hit everything else, which is Document 4. Evaluating Search B, (NOT cars) AND (NOT trucks), the search "NOT cars" will hit on documents that do not contain "cars", which is Documents 2 and 4; similarly the search "NOT trucks" will hit on Documents 1 and 4. Applying the AND operator to these two searches will hit on the documents that are common to both, which is Document 4. A similar evaluation can be applied to show that the following two searches will return the same set of documents: Search C, NOT (cars AND trucks), and Search D, (NOT cars) OR (NOT trucks). The laws are named after Augustus De Morgan, who introduced a formal version of the laws to classical propositional logic.
De Morgan's formulation was influenced by the algebraization of logic undertaken by George Boole. Nevertheless, a similar observation was made by Aristotle and was known to Greek and medieval logicians. For example, in the 14th century, William of Ockham wrote down the words that would result from reading the laws out. Jean Buridan, in his Summulae de Dialectica, also describes rules of conversion that follow the lines of De Morgan's laws. Still, De Morgan is given credit for stating the laws in the terms of modern formal logic. De Morgan's laws can be proved easily and may even seem trivial; nonetheless, these laws are helpful in making valid inferences in proofs.
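Because propositional variables range over only two values, both laws can be verified exhaustively over the four truth assignments:

```python
from itertools import product

for p, q in product([False, True], repeat=2):
    assert (not (p and q)) == ((not p) or (not q))   # ¬(P ∧ Q) ≡ ¬P ∨ ¬Q
    assert (not (p or q)) == ((not p) and (not q))   # ¬(P ∨ Q) ≡ ¬P ∧ ¬Q
print("both laws hold for all truth assignments")
```

This exhaustive check is itself a miniature truth-table proof, the standard way such propositional equivalences are established.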

11.
Free Boolean algebra
–
The generators of a free Boolean algebra can represent independent propositions. Consider, for example, the propositions "John is tall" and "Mary is rich". Other elements of the Boolean algebra are then logical disjunctions of the atoms, such as "John is tall and Mary is not rich, or John is not tall and Mary is rich". In addition there is one more element, FALSE, which can be thought of as the empty disjunction. This example yields a Boolean algebra with 16 elements; in general, for finite n, the free Boolean algebra with n generators has 2ⁿ atoms and therefore 2^(2ⁿ) elements. If there are infinitely many generators, a similar situation prevails except that now there are no atoms. Each element of the Boolean algebra is a combination of finitely many of the generating propositions, with two such elements deemed identical if they are logically equivalent. In fact, this notion of freeness generalizes to any algebraic structure definable in the framework of universal algebra. Above, we said that a free Boolean algebra is a Boolean algebra with a set of generators that behave a certain way; alternatively, one can start with a set. Every set X generates a free Boolean algebra FX, defined as the algebra such that for every algebra B and function f: X → B, there is a unique Boolean algebra homomorphism f′ extending f. Diagrammatically, where iX is the inclusion, the dashed arrow denotes uniqueness. The idea is that one chooses where to send the elements of X; if FX contained elements inexpressible as combinations of elements of X, then f′ wouldn't be unique. It is easily shown that FX is unique, so this definition makes sense. It is also shown that a free Boolean algebra with generating set X, as defined originally, is isomorphic to FX. One shortcoming of the definition is that the diagram doesn't capture that f′ is a homomorphism.
We can fix this by separating it into two diagrams, one in BA and one in Set. To relate the two, we introduce a functor U: BA → Set that forgets the algebraic structure, mapping algebras and homomorphisms to their underlying sets and functions. The functor U can be thought of as a device to pull the homomorphism f′ back into Set so it can be related to f. The remarkable aspect of this is that the latter diagram is one of the various definitions of when two functors are adjoint, and our F easily extends to a functor Set → BA. Concretely, the free Boolean algebra on κ generators can be realized as the algebra of clopen subsets of the product space 2^κ; for each α < κ, the αth generator is the set of all elements of 2^κ whose αth coordinate is 1. In particular, the free Boolean algebra with ℵ0 generators is the collection of all clopen subsets of a Cantor space. For more on this approach to free Boolean algebras, see Stone's representation theorem for Boolean algebras.
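The 16-element example can be reproduced concretely: the free Boolean algebra on two generators can be realized as the algebra of truth functions {0,1}² → {0,1}. The following sketch (illustrative, names are my own) closes the two generators under NOT and AND, which are functionally complete, and counts elements and atoms:

```python
from itertools import product

assignments = list(product([0, 1], repeat=2))
p = tuple(a for a, b in assignments)   # truth table of generator p
q = tuple(b for a, b in assignments)   # truth table of generator q

# close {p, q} under pointwise NOT and AND (OR is definable from these)
elements = {p, q}
changed = True
while changed:
    changed = False
    new = {tuple(1 - x for x in f) for f in elements}
    new |= {tuple(x & y for x, y in zip(f, g))
            for f in elements for g in elements}
    if not new <= elements:
        elements |= new
        changed = True

print(len(elements))   # 16 = 2^(2^2) elements
atoms = [f for f in elements if sum(f) == 1]
print(len(atoms))      # 4 atoms, one per conjunction of (possibly negated) generators
```

Each atom is a truth function that is 1 at exactly one assignment, matching the four conjunctions of the generators and their negations.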

12.
Karnaugh map
–
The Karnaugh map, also known as the K-map, is a method to simplify Boolean algebra expressions. Maurice Karnaugh introduced it in 1953 as a refinement of Edward Veitch's 1952 Veitch diagram. The Karnaugh map reduces the need for extensive calculations by taking advantage of humans' pattern-recognition capability. It also permits the identification and elimination of potential race conditions. Optimal groups of 1s or 0s are identified, which represent the terms of a canonical form of the logic in the original truth table; these terms can be used to write a minimal Boolean expression representing the required logic. Karnaugh maps are used to simplify real-world logic requirements so that they can be implemented using a minimum number of physical logic gates. A sum-of-products expression can always be implemented using AND gates feeding into an OR gate, and a product-of-sums expression leads to OR gates feeding an AND gate. Karnaugh maps can also be used to simplify logic expressions in software design; Boolean conditions, as used for example in conditional statements, can get very complicated. Once minimised, canonical sum-of-products and product-of-sums expressions can be implemented directly using AND and OR logic operators. Karnaugh maps are used to facilitate the simplification of Boolean algebra functions. For example, consider the Boolean function described by the following truth table, with two different notations describing the same function in unsimplified Boolean algebra, using the Boolean variables A, B, C, D: F = ∑ mᵢ, where the mᵢ are the minterms to map (the rows of the truth table whose output is 1), and F = ∏ Mᵢ, where the Mᵢ are the maxterms to map (the rows whose output is 0). In the example above, the four input variables can be combined in 16 different ways, so the truth table has 16 rows, and the Karnaugh map is arranged in a 4 × 4 grid. The row and column indices are ordered in Gray code rather than binary numerical order; Gray code ensures that only one variable changes between each pair of adjacent cells.
Each cell of the completed Karnaugh map contains a binary digit representing the function's output for that combination of inputs. After the Karnaugh map has been constructed, it is used to find one of the simplest possible forms (a canonical form) for the information in the truth table. Adjacent 1s in the Karnaugh map represent opportunities to simplify the expression. The minterms for the final expression are found by encircling groups of 1s in the map: minterm groups must be rectangular and must have an area that is a power of two, and minterm rectangles should be as large as possible without containing any 0s. Groups may overlap in order to make each one larger.
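The Gray-code ordering of the map's rows and columns can be generated with the standard reflect-and-prefix construction; a small sketch (function name is my own):

```python
def gray(n):
    """Return the n-bit reflected Gray code sequence as bit strings."""
    if n == 0:
        return ['']
    prev = gray(n - 1)
    return ['0' + c for c in prev] + ['1' + c for c in reversed(prev)]

codes = gray(2)
print(codes)  # ['00', '01', '11', '10']: the K-map row/column order

# adjacent entries, including the cyclic wrap-around, differ in exactly one bit
for a, b in zip(codes, codes[1:] + codes[:1]):
    assert sum(x != y for x, y in zip(a, b)) == 1
```

The single-bit-change property, including the wrap-around, is what lets groups of 1s in the map span the edges of the grid.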