1.
Euler diagram
–
An Euler diagram is a diagrammatic means of representing sets and their relationships. Typically they involve overlapping shapes and may be scaled so that the area of a shape is proportional to the number of elements it contains; they are particularly useful for explaining complex hierarchies and overlapping definitions. They are often confused with Venn diagrams, but unlike Venn diagrams, which show all possible relations between the different sets, an Euler diagram shows only the relevant relationships. The first use of "Eulerian circles" is commonly attributed to the Swiss mathematician Leonhard Euler. In the United States, both Venn and Euler diagrams were incorporated into instruction in set theory as part of the new math movement of the 1960s; since then, they have also been adopted by other curriculum fields, such as reading, as well as by organizations. Euler diagrams consist of simple closed shapes in a two-dimensional plane, each depicting a set or category. How, or whether, these shapes overlap demonstrates the relationships between the sets. There are only three possible relationships between any two sets: completely inclusive, partially inclusive, and exclusive. These are also referred to as containment, overlap, and neither, or, especially in mathematics, as subset, intersection, and disjointness. Curves whose interior zones do not intersect represent disjoint sets; two curves whose interior zones intersect represent sets that have common elements; and a curve that is contained completely within the interior zone of another represents a subset of it. Venn diagrams are a more restrictive form of Euler diagram. A Venn diagram must contain all 2^n logically possible zones of overlap between its n curves, representing all combinations of inclusion/exclusion of its constituent sets; regions not part of the set are indicated by coloring them black, in contrast to Euler diagrams, where such regions are simply absent. When the number of sets grows beyond three, a Venn diagram becomes visually complex, especially compared to the corresponding Euler diagram. The difference between Euler and Venn diagrams can be seen in the following example: the Venn diagram, which uses the same categories of Animal, Mineral, and Four Legs, does not encapsulate these relationships as directly. Traditionally, the emptiness of a set in a Venn diagram is depicted by shading in the region; an Euler diagram represents emptiness either by shading or by the absence of a region. Often a set of well-formedness conditions is imposed: topological or geometric constraints on the structure of the diagram. For example, connectedness of zones might be enforced, or concurrency of curves or multiple points might be banned. In the adjacent diagram, examples of small Venn diagrams are transformed into Euler diagrams by sequences of transformations; some of the intermediate diagrams have concurrency of curves. However, this sort of transformation of a Venn diagram with shading into an Euler diagram without shading is not always possible: there are examples of Euler diagrams with nine sets that are not drawable using simple closed curves without the creation of unwanted zones, since they would have to have non-planar dual graphs.
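The three possible relationships between two sets can be checked directly. The following is a minimal sketch in Python (supplied here for illustration, not part of the source; the function name classify is an invented helper):

    def classify(a: set, b: set) -> str:
        """Return the Euler-diagram relationship between two sets."""
        if a <= b or b <= a:           # one curve's interior lies inside the other
            return "containment (subset)"
        if a & b:                      # interior zones intersect
            return "overlap (intersection)"
        return "neither (disjoint)"    # interior zones do not intersect

    print(classify({1, 2}, {1, 2, 3}))   # containment (subset)
    print(classify({1, 2}, {2, 3}))      # overlap (intersection)
    print(classify({1, 2}, {3, 4}))      # neither (disjoint)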
2.
Computer science
–
Computer science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems, and the field can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory, are highly abstract, while other fields focus on the challenges in implementing computation; human–computer interaction, for instance, considers the challenges in making computers and computations useful and usable. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity; further, algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner; he may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. Charles Babbage started developing his Analytical Engine in 1834, and in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched card system derived from the Jacquard loom, making it infinitely programmable. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; when the machine was finished, some hailed it as "Babbage's dream come true". During the 1940s, as new and more powerful computing machines were developed and it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as an academic discipline in the 1950s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge in 1953; the first computer science program in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own rights, and it is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and later the IBM 709 computers. Still, working with the IBM machines was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again. During the late 1950s, the computer science discipline was very much in its developmental stages. Time has seen significant improvements in the usability and effectiveness of computing technology, and modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.
3.
Turing machine
–
Despite the model's simplicity, given any computer algorithm, a Turing machine can be constructed that is capable of simulating that algorithm's logic. The machine operates on an infinite memory tape divided into discrete cells; it positions its head over a cell and reads the symbol there. The Turing machine was invented in 1936 by Alan Turing, who called it an "a-machine" (automatic machine). Turing machines prove fundamental limitations on the power of mechanical computation. Turing completeness is the ability of a system of instructions to simulate a Turing machine. A Turing machine is a general example of a CPU that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data. More specifically, it is a machine capable of enumerating some arbitrary subset of valid strings of an alphabet. Assuming a black box, the Turing machine cannot know whether it will eventually enumerate any one specific string of the subset with a given program; this is due to the fact that the halting problem is unsolvable, which has major implications for the theoretical limits of computing. The Turing machine is capable of processing an unrestricted grammar, which implies that it is capable of robustly evaluating first-order logic in an infinite number of ways; this is famously demonstrated through lambda calculus. A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine. The Church–Turing thesis states that Turing machines indeed capture the notion of effective methods in logic and mathematics; studying their abstract properties yields many insights into computer science and complexity theory. At any moment there is one symbol in the machine, called the scanned symbol. The machine can alter the scanned symbol, and its behavior is in part determined by that symbol; however, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine. Any symbol on the tape may therefore eventually have an innings. The Turing machine mathematically models a machine that mechanically operates on a tape. On this tape are symbols, which the machine can read and write, one at a time. In the original article, Turing imagines not a mechanism, but a person, whom he calls the "computer", who executes these deterministic mechanical rules slavishly. In the formal definition, if δ is not defined on the current state and the current tape symbol, the machine halts; q0 ∈ Q is the initial state, and F ⊆ Q is the set of final or accepting states. The initial tape contents are said to be accepted by M if the machine eventually halts in a state from F. Anything that operates according to these specifications is a Turing machine. The 7-tuple for the 3-state busy beaver looks like this: Q = {A, B, C, HALT}; Γ = {0, 1}; b = 0 (the blank symbol); Σ = {1}; q0 = A (the initial state); F = {HALT}; δ = see the state table below. Initially all tape cells are marked with 0. In the words of van Emde Boas (p. 6): "The set-theoretical object provides only partial information on how the machine will behave and what its computations will look like." For instance, there will need to be many decisions on what the symbols actually look like, and a failproof way of reading and writing symbols indefinitely.
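To make the 7-tuple concrete, here is a minimal simulator in Python (an illustrative sketch, not from the source), running the standard 3-state, 2-symbol busy beaver as the transition table δ:

    # delta maps (state, scanned symbol) -> (symbol to write, head move, next state)
    delta = {
        ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "C"),
        ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "B"),
        ("C", 0): (1, -1, "B"), ("C", 1): (1, +1, "HALT"),
    }

    tape = {}                      # sparse tape; unwritten cells read as the blank 0
    state, head, steps = "A", 0, 0
    while state != "HALT":         # stop on reaching the accepting state in F
        write, move, state = delta[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        steps += 1

    print(steps, sum(tape.values()))   # halts after finitely many steps, leaving six 1s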
4.
Mathematical proof
–
In mathematics, a proof is an inferential argument for a mathematical statement. In the argument, other previously established statements, such as theorems, can be used. In principle, a proof can be traced back to self-evident or assumed statements, known as axioms, along with accepted rules of inference. Axioms may be treated as conditions that must be met before the statement applies. Proofs are examples of exhaustive deductive reasoning and are distinguished from empirical arguments or non-exhaustive inductive reasoning: a proof must demonstrate that a statement is always true, rather than enumerate many confirmatory cases. An unproved proposition that is believed to be true is known as a conjecture. Proofs employ logic but usually include some amount of natural language, which usually admits some ambiguity; in fact, the vast majority of proofs in mathematics can be considered as applications of rigorous informal logic. Purely formal proofs, written in formal language instead of natural language, are considered in proof theory. The distinction between formal and informal proofs has led to examination of current and historical mathematical practice, and of quasi-empiricism in mathematics. The philosophy of mathematics is concerned with the role of language and logic in proofs. The word "proof" comes from the Latin probare, meaning "to test". Related modern words are the English probe, probation, and probability, and the Spanish probar and Italian provare. The early use of "probity" was in the presentation of legal evidence: a person of authority, such as a nobleman, was said to have probity, whereby the evidence was weighed by his relative authority. Plausibility arguments using heuristic devices such as pictures and analogies preceded strict mathematical proof. It is likely that the idea of demonstrating a conclusion first arose in connection with geometry. The development of mathematical proof is primarily the product of ancient Greek mathematics, and one of its greatest achievements. Thales proved some theorems in geometry; Eudoxus and Theaetetus formulated theorems but did not prove them. Aristotle said definitions should describe the concept being defined in terms of other concepts already known. Euclid's book, the Elements, was read by anyone who was considered educated in the West until the middle of the 20th century. Further advances took place in medieval Islamic mathematics: while earlier Greek proofs were largely geometric demonstrations, the development of arithmetic and algebra by Islamic mathematicians allowed more general proofs that no longer depended on geometry. In the 10th century CE, the Iraqi mathematician Al-Hashimi provided general proofs for numbers as he considered multiplication, division, and so on for "lines"; he used this method to provide a proof of the existence of irrational numbers. An inductive proof for arithmetic sequences was introduced in the Al-Fakhri by Al-Karaji, and Alhazen developed the method of proof by contradiction, as the first attempt at proving the Euclidean parallel postulate. Modern proof theory treats proofs as inductively defined data structures; there is no longer an assumption that axioms are true in any sense, and this allows for parallel mathematical theories built on alternate sets of axioms.
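As a worked illustration of proof by induction (an example supplied here, not taken from the source), the classic argument that the sum of the first n positive integers is n(n+1)/2:

    % Claim: \sum_{k=1}^{n} k = \frac{n(n+1)}{2} for every integer n \geq 1.
    % Base case (n = 1): the sum is 1, and \frac{1(1+1)}{2} = 1.
    % Inductive step: assume the claim holds for n. Then
    \sum_{k=1}^{n+1} k = \frac{n(n+1)}{2} + (n+1) = \frac{(n+1)(n+2)}{2},
    % which is the claim for n + 1. By induction, the claim holds for all n \geq 1.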
5.
Algorithm
–
In mathematics and computer science, an algorithm is a self-contained sequence of actions to be performed. Algorithms can perform calculation, data processing, and automated reasoning tasks. An algorithm is an effective method that can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. Giving a formal definition of algorithms, corresponding to the intuitive notion, remains a challenging problem. In English, the word was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it wasn't until the late 19th century that "algorithm" took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris, which translates as: "Algorism is the art by which at present we use those Indian figures, which number two times five." The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice, or Talibus Indorum, or Hindu numerals. An informal definition could be "a set of rules that precisely defines a sequence of operations", which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually. But humans can do something equally useful in the case of certain enumerably infinite sets: they can give explicit instructions for determining the nth member of the set, for arbitrary finite n. An enumerably infinite set is one whose elements can be put into one-to-one correspondence with the integers. The concept of algorithm is also used to define the notion of decidability; that notion is central for explaining how formal systems come into being, starting from a set of axioms. In logic, the time that an algorithm requires to complete cannot be measured. From such uncertainties, which characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete and abstract usage of the term. Algorithms are essential to the way computers process data; thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Although this may seem extreme, the arguments in its favor are hard to refute: according to Gurevich, Turing's informal argument in favor of his thesis justifies a stronger thesis, and according to Savage, an algorithm is a computational process defined by a Turing machine. Typically, when an algorithm is associated with processing information, data can be read from an input source and written to an output device. Stored data are regarded as part of the internal state of the entity performing the algorithm; in practice, the state is stored in one or more data structures. For some such computational process, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. That is, any conditional steps must be dealt with, case-by-case.
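The idea of explicit instructions for the nth member of an enumerably infinite set can be made concrete with a short sketch (supplied here, not from the source; the helper names evens and nth are illustrative):

    from itertools import count, islice

    def evens():
        """Enumerate the infinite set of even natural numbers: 0, 2, 4, ..."""
        for k in count():
            yield 2 * k

    def nth(iterable, n):
        """Determine the nth member (0-indexed) of an enumerated set."""
        return next(islice(iterable, n, None))

    print(nth(evens(), 5))   # 10; terminates for any finite n, though the set is infinite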
6.
Travelling salesman problem
–
The travelling salesman problem (TSP) asks: given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city? It is an NP-hard problem in combinatorial optimization, important in operations research and theoretical computer science. TSP is a special case of the travelling purchaser problem and of the vehicle routing problem. In the theory of computational complexity, the decision version of the TSP belongs to the class of NP-complete problems. Thus, it is possible that the running time for any algorithm for the TSP increases superpolynomially with the number of cities. The problem was first formulated in 1930 and is one of the most intensively studied problems in optimization; it is used as a benchmark for many optimization methods. The TSP has several applications even in its purest formulation, such as planning and logistics; slightly modified, it appears as a sub-problem in many areas, such as DNA sequencing. The TSP also appears in astronomy, as astronomers observing many sources will want to minimize the time spent moving the telescope between the sources. In many applications, additional constraints such as limited resources or time windows may be imposed. The origins of the travelling salesman problem are unclear. A handbook for travelling salesmen from 1832 mentions the problem and includes example tours through Germany and Switzerland. The problem was mathematically formulated in the 1800s by the Irish mathematician W. R. Hamilton and by the British mathematician Thomas Kirkman; Hamilton's Icosian Game was a puzzle based on finding a Hamiltonian cycle. Hassler Whitney at Princeton University introduced the name "travelling salesman problem" soon after. Dantzig, Fulkerson, and Johnson speculated that, given a near-optimal solution, one may be able to find optimality or prove optimality by adding a small number of extra inequalities (cuts). They used this idea to solve their initial 49-city problem using a string model, and they found they only needed 26 cuts to come to a solution. As well as cutting plane methods, Dantzig, Fulkerson, and Johnson used branch-and-bound methods. In the following decades, the problem was studied by many researchers from mathematics, computer science, chemistry, physics, and other sciences. Christofides made a big advance in this direction, giving an approach for which the worst case is known: his algorithm, given in 1976, produces a tour at worst 1.5 times longer than the optimal solution. As the algorithm was so simple and quick, many hoped it would give way to a near-optimal solution method; however, it remained the best known worst-case guarantee until 2011, when it was beaten by less than a billionth of a percent. Richard M. Karp showed in 1972 that the Hamiltonian cycle problem is NP-complete, which implies the NP-hardness of TSP; this supplied a mathematical explanation for the apparent computational difficulty of finding optimal tours. Great progress was made in the late 1970s and 1980s, when Grötschel, Padberg, Rinaldi, and others managed to exactly solve instances with up to 2,392 cities, using cutting planes and branch-and-bound. In the 1990s, Applegate, Bixby, Chvátal, and Cook developed the program Concorde, which has been used in many recent record solutions.
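A brute-force exact solver makes the superpolynomial growth tangible: it tries all (n-1)! tours, so it is feasible only for small n. A sketch in Python (supplied here, not from the source; the distance matrix is made up for illustration):

    from itertools import permutations

    def shortest_tour(dist):
        """dist[i][j] = distance from city i to city j; returns (length, tour)."""
        n = len(dist)
        best_len, best_tour = float("inf"), None
        for perm in permutations(range(1, n)):    # fix city 0 as the starting point
            tour = (0,) + perm
            length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
            if length < best_len:
                best_len, best_tour = length, tour
        return best_len, best_tour

    dist = [[0, 2, 9, 10],
            [1, 0, 6, 4],
            [15, 7, 0, 8],
            [6, 3, 12, 0]]
    print(shortest_tour(dist))   # (21, (0, 2, 3, 1))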
7.
Computational complexity theory
–
Computational complexity theory classifies computational problems according to their inherent difficulty. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication and the number of gates in a circuit. One of the roles of computational complexity theory is to determine the limits on what computers can and cannot do. Closely related fields in computer science are analysis of algorithms and computability theory. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. A computational problem can be viewed as an infinite collection of instances together with a solution for every instance; the input string for a problem is referred to as a problem instance. In computational complexity theory, a problem refers to the abstract question to be solved; in contrast, an instance of this problem is a rather concrete utterance. For example, consider the problem of primality testing: the instance is a number, and the solution is "yes" if the number is prime and "no" otherwise. Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. For this reason, complexity theory addresses computational problems and not particular problem instances. When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet, so, as in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices; the discussion is kept independent of the choice of encoding by ensuring that different representations can be transformed into each other efficiently. Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem whose answer is either "yes" or "no"; it can be viewed as a formal language, where the members of the language are instances whose output is "yes". The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the language under consideration. If the algorithm deciding this problem returns the answer "yes", the algorithm is said to accept the input string; otherwise, it is said to reject the input. An example of a decision problem is the following: given an arbitrary graph, decide whether the graph is connected.
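Viewing a decision problem as language membership can be shown in a few lines. A sketch (supplied here, not from the source): the language is the set of bitstrings that encode a prime, and the decider accepts exactly its members:

    def is_prime(n: int) -> bool:
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    def accepts(instance: str) -> bool:
        """Decide membership: does this bitstring encode a prime number?"""
        return is_prime(int(instance, 2))

    print(accepts("101"))   # True: 5 is prime, so "101" is in the language
    print(accepts("100"))   # False: 4 is composite, so the decider rejects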
8.
Set (mathematics)
–
In mathematics, a set is a well-defined collection of distinct objects, considered as an object in its own right. For example, the numbers 2, 4, and 6 are distinct objects when considered separately, but when considered collectively they form a single set of size three, written {2, 4, 6}. Sets are one of the most fundamental concepts in mathematics. Developed at the end of the 19th century, set theory is now a ubiquitous part of mathematics. In mathematics education, elementary topics such as Venn diagrams are taught at a young age. The German word Menge, rendered as "set" in English, was coined by Bernard Bolzano in his work The Paradoxes of the Infinite. A set is a collection of distinct objects. The objects that make up a set can be anything: numbers, people, letters of the alphabet, other sets, and so on. Sets are conventionally denoted with capital letters. Sets A and B are equal if and only if they have precisely the same elements. Cantor's definition turned out to be inadequate; instead, the notion of a set is taken as an undefined, primitive notion in axiomatic set theory. There are two ways of describing, or specifying the members of, a set. One way is by intensional definition, using a rule or semantic description: "A is the set whose members are the first four positive integers"; "B is the set of colors of the French flag". The second way is by extension, that is, listing each member of the set: an extensional definition is denoted by enclosing the list of members in curly brackets, as in C = {4, 2, 1, 3} and D = {blue, white, red}. One often has the choice of specifying a set either intensionally or extensionally; in the examples above, for instance, A = C and B = D. There are two important points to note about sets. First, in an extensional definition, a set member can be listed two or more times, for example {11, 6, 6}. However, per extensionality, two definitions of sets which differ only in that one of the definitions lists set members multiple times define, in fact, the same set; hence, the set {11, 6, 6} is identical to the set {11, 6}. The second important point is that the order in which the elements of a set are listed is irrelevant. We can illustrate these two important points with an example: {6, 11} = {11, 6} = {11, 6, 6, 11}. For sets with many elements, the enumeration of members can be abbreviated; for instance, the set of the first thousand positive integers may be specified extensionally as {1, 2, 3, ..., 1000}, where the ellipsis indicates that the list continues in the obvious way. Ellipses may also be used where sets have infinitely many members; thus the set of positive even numbers can be written as {2, 4, 6, 8, ...}.
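Both points, and the intensional/extensional distinction, can be checked directly with Python's built-in sets (a sketch supplied here, not from the source):

    extensional = {4, 2, 1, 3}                # listing each member, as for C above
    intensional = {n for n in range(1, 5)}    # rule: the first four positive integers
    print(extensional == intensional)         # True: two descriptions, one set

    print({11, 6, 6} == {11, 6})              # True: repeated listings are irrelevant
    print({6, 11} == {11, 6, 6, 11})          # True: order is irrelevant too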
9.
Integer
–
An integer is a number that can be written without a fractional component. For example, 21, 4, 0, and −2048 are integers, while 9.75, 5 1⁄2, and √2 are not. The set of integers consists of zero, the positive natural numbers, also called whole numbers or counting numbers, and their additive inverses. It is often denoted by a boldface Z or blackboard bold ℤ, standing for the German word Zahlen. ℤ is a subset of the sets of rational and real numbers and, like the natural numbers, is countably infinite. The integers form the smallest group and the smallest ring containing the natural numbers. In algebraic number theory, the integers are sometimes called rational integers to distinguish them from the more general algebraic integers; in fact, the rational integers are the algebraic integers that are also rational numbers. Like the natural numbers, Z is closed under the operations of addition and multiplication; that is, the sum and product of any two integers is an integer. Moreover, with the inclusion of the negative natural numbers, and, importantly, 0, Z is also closed under subtraction. The integers form a unital ring which is the most basic one, in the following sense: for any unital ring, there is a unique ring homomorphism from the integers into this ring. This universal property, namely being an initial object in the category of rings, characterizes the ring Z. Z is not closed under division, since the quotient of two integers need not be an integer; and although the natural numbers are closed under exponentiation, the integers are not, since the result can be a fraction when the exponent is negative. The following lists some of the properties of addition and multiplication for any integers a, b, and c. In the language of abstract algebra, the first five properties listed above for addition say that Z under addition is an abelian group. As a group under addition, Z is a cyclic group; in fact, Z under addition is the only infinite cyclic group, in the sense that any infinite cyclic group is isomorphic to Z. The first four properties listed above for multiplication say that Z under multiplication is a commutative monoid. However, not every integer has a multiplicative inverse; e.g., there is no integer x such that 2x = 1, because the left-hand side is even. This means that Z under multiplication is not a group. All the rules from the above property table, except for the last, taken together say that Z together with addition and multiplication is a commutative ring with unity. It is the prototype of all objects of such algebraic structure. Only those equalities of expressions that are true in any unital commutative ring are true in Z for all values of variables; note that certain non-zero integers map to zero in certain rings. The lack of zero-divisors in the integers means that the commutative ring Z is an integral domain.
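The closure properties can be observed directly in Python, whose int type models the integers (a quick sketch, not from the source):

    a, b = 7, -3
    print(type(a + b), type(a * b), type(a - b))   # all int: closed under +, *, and -
    print(a / b)    # -2.3333...: a float, so the integers are not closed under division
    print(2 ** -1)  # 0.5: a negative exponent leads out of the integers as well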
10.
Boolean satisfiability problem
–
In computer science, the Boolean satisfiability problem (SAT) is the problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable; on the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE; in contrast, "a AND NOT a" is unsatisfiable. SAT is one of the first problems that was proven to be NP-complete. This means that all problems in the complexity class NP, which includes a wide range of natural decision and optimization problems, are at most as difficult to solve as SAT; practical SAT instances arise in, e.g., artificial intelligence, circuit design, and automatic theorem proving. A propositional logic formula, also called a Boolean expression, is built from variables, the operators AND, OR, NOT, and parentheses. A formula is said to be satisfiable if it can be made TRUE by assigning appropriate logical values to its variables. The Boolean satisfiability problem is, given a formula, to check whether it is satisfiable. This decision problem is of central importance in various areas of computer science, including theoretical computer science, complexity theory, algorithmics, and cryptography. There are several special cases of the Boolean satisfiability problem in which the formulas are required to have a particular structure. A literal is either a variable, then called a positive literal, or the negation of a variable, then called a negative literal; a clause is a disjunction of literals. A clause is called a Horn clause if it contains at most one positive literal. A formula is in conjunctive normal form (CNF) if it is a conjunction of clauses. For example, the CNF formula (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 is satisfiable: choosing x1 = FALSE, x2 = FALSE, and x3 arbitrarily, it evaluates to (FALSE ∨ ¬FALSE) ∧ (¬FALSE ∨ FALSE ∨ x3) ∧ ¬FALSE, and in turn to TRUE ∧ TRUE ∧ TRUE. In contrast, the CNF formula a ∧ ¬a, consisting of two clauses of one literal, is unsatisfiable, since for a = TRUE and a = FALSE it evaluates to TRUE ∧ ¬TRUE and FALSE ∧ ¬FALSE, respectively. Different sets of allowed Boolean operators lead to different problem versions. As an example, a generalized clause applies an operator R to three literals, and a conjunction of such generalized clauses is a generalized conjunctive normal form, with R being the ternary operator that is TRUE just if exactly one of its arguments is. Using the laws of Boolean algebra, every propositional logic formula can be transformed into an equivalent conjunctive normal form, which, however, may be exponentially longer. For example, transforming the formula (x1 ∧ y1) ∨ (x2 ∧ y2) ∨ ... ∨ (xn ∧ yn) into conjunctive normal form yields (x1 ∨ x2 ∨ ... ∨ xn) ∧ (y1 ∨ x2 ∨ ... ∨ xn) ∧ ... ∧ (y1 ∨ y2 ∨ ... ∨ yn); while the former is a disjunction of n conjunctions of 2 variables, the latter consists of 2^n clauses of n variables.
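Satisfiability of a small CNF formula can be decided by brute force over all 2^n assignments (a sketch supplied here, not from the source; encoding literals as signed integers is an illustrative convention):

    from itertools import product

    def satisfiable(num_vars, clauses):
        """Literal +i means variable i, -i its negation (variables are 1-indexed)."""
        for bits in product([False, True], repeat=num_vars):  # all 2^n assignments
            if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
                   for clause in clauses):
                return bits                                   # a satisfying assignment
        return None                                           # unsatisfiable

    # (x1 OR NOT x2) AND (NOT x1 OR x2 OR x3) AND (NOT x1), the example above:
    print(satisfiable(3, [[1, -2], [-1, 2, 3], [-1]]))   # (False, False, False)
    print(satisfiable(1, [[1], [-1]]))                   # None: a AND NOT a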
11.
Nondeterministic algorithm
–
In computer science, a nondeterministic algorithm is an algorithm that, even for the same input, can exhibit different behaviors on different runs, as opposed to a deterministic algorithm. There are several ways an algorithm may behave differently from run to run: a concurrent algorithm can perform differently on different runs due to a race condition; a probabilistic algorithm's behavior depends on a random number generator; and an algorithm that solves a problem in nondeterministic polynomial time can run in polynomial time or exponential time depending on the choices it makes during execution. Nondeterministic algorithms are often used to find an approximation to a solution. The notion was introduced by Robert W. Floyd in 1967. Often in computational theory, the term "algorithm" refers to a deterministic algorithm. A nondeterministic algorithm is different from its more familiar deterministic counterpart in its ability to arrive at outcomes using various routes; this property is captured mathematically in nondeterministic models of computation such as the nondeterministic finite automaton. In some scenarios, all possible paths are allowed to run simultaneously. In algorithm design, nondeterministic algorithms are used when the problem solved by the algorithm inherently allows multiple outcomes; crucially, every outcome the nondeterministic algorithm produces is valid, regardless of which choices the algorithm makes while running. In computational complexity theory, nondeterministic algorithms are ones that, at every possible step, can allow for multiple continuations. These algorithms do not arrive at a solution along every possible path; however, it suffices that some path does. The choices can be interpreted as guesses in a search process. A large number of problems can be conceptualized through nondeterministic algorithms, including the most famous unresolved question in computing theory, P vs NP. One way to simulate a nondeterministic algorithm N using a deterministic algorithm D is to treat sets of states of N as states of D; this means that D simultaneously traces all the possible execution paths of N. Another is randomization, which consists of letting all choices be determined by a random number generator; the result is called a probabilistic algorithm. An example problem: given a number n larger than two, determine whether it is prime. A nondeterministic algorithm for this problem is the following, based on Fermat's little theorem. Repeat thirty times: pick a random integer a with 1 < a < n; if a^(n−1) mod n ≠ 1, return the answer "composite". Otherwise, return the answer "probably prime". If this algorithm returns the answer "composite", then the number is certainly not prime.
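The Fermat test described above is short to write down. A sketch in Python (supplied here, not from the source; pow(a, n - 1, n) computes a^(n−1) mod n efficiently):

    import random

    def fermat_test(n: int, rounds: int = 30) -> str:
        """Probabilistic primality check for n > 2 via Fermat's little theorem."""
        for _ in range(rounds):
            a = random.randrange(2, n)        # the nondeterministic "guess"
            if pow(a, n - 1, n) != 1:
                return "composite"            # certainly correct
        return "probably prime"               # correct with high probability

    print(fermat_test(221))   # 221 = 13 * 17, so almost certainly "composite"
    print(fermat_test(223))   # 223 is prime: "probably prime"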
12.
Complement (set theory)
–
In set theory, the complement of a set A refers to the elements not in A. The relative complement of A with respect to a set B, also termed the difference of sets B and A, written B ∖ A, is the set of elements in B but not in A. When all sets under consideration are considered to be subsets of a given set U, the absolute complement of A is the set of elements in U but not in A. Formally, if A and B are sets, then the relative complement of A in B, also termed the set-theoretic difference of B and A, is the set of elements in B but not in A. The relative complement of A in B is denoted B ∖ A according to the ISO 31-11 standard. If R is the set of real numbers and Q is the set of rational numbers, then R ∖ Q is the set of irrational numbers. Let A, B, and C be three sets. The following identities capture notable properties of relative complements: C ∖ (A ∩ B) = (C ∖ A) ∪ (C ∖ B), and C ∖ (B ∖ A) = (C ∩ A) ∪ (C ∖ B), with the important special case C ∖ (C ∖ A) = C ∩ A demonstrating that intersection can be expressed using only the relative complement operation. If A is a set, then the absolute complement of A is the set of elements not in A; formally, A∁ = {x ∈ U : x ∉ A}. The absolute complement of A is usually denoted by A∁; other notations include Aᶜ, Ā, A′, ∁U A, and ∁A. Assume that the universe is the set of integers: if A is the set of odd numbers, then the complement of A is the set of even numbers; if B is the set of multiples of 3, then the complement of B is the set of numbers congruent to 1 or 2 modulo 3. Assume that the universe is the standard 52-card deck: if the set A is the suit of spades, then the complement of A is the union of the suits of clubs, diamonds, and hearts; if the set B is the union of the suits of clubs and diamonds, then the complement of B is the union of the suits of hearts and spades. Let A and B be two sets in a universe U. The following identities capture important properties of absolute complements. De Morgan's laws: (A ∪ B)∁ = A∁ ∩ B∁ and (A ∩ B)∁ = A∁ ∪ B∁. Complement laws: A ∪ A∁ = U and A ∩ A∁ = ∅; if A ⊂ B, then B∁ ⊂ A∁. Involution or double complement law: (A∁)∁ = A. Relationships between relative and absolute complements: A ∖ B = A ∩ B∁. Relationship with set difference: A∁ ∖ B∁ = B ∖ A. The first two complement laws above show that if A is a non-empty, proper subset of U, then {A, A∁} is a partition of U. In the LaTeX typesetting language, the command \setminus is usually used for rendering a set difference symbol; when rendered, the \setminus command looks identical to \backslash, except that it has a little more space in front of and behind the slash, akin to the LaTeX sequence \mathbin{\backslash}.
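The relative and absolute complements, and one of De Morgan's laws, can be checked with Python's set difference operator (a sketch supplied here, not from the source):

    U = set(range(10))              # a toy universe
    A = {n for n in U if n % 2}     # the odd numbers in U
    B = {n for n in U if n % 3 == 0}

    print(B - A)                    # relative complement B \ A: {0, 6}
    print(U - A)                    # absolute complement of A: {0, 2, 4, 6, 8}
    print(U - (A | B) == (U - A) & (U - B))   # De Morgan's law: True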