Subset
In mathematics, a set A is a subset of a set B (equivalently, B is a superset of A) if A is "contained" inside B, that is, if all elements of A are also elements of B. A and B may coincide. The relationship of one set being a subset of another is called inclusion, or sometimes containment; "A is a subset of B" may also be expressed as "B includes A". The subset relation defines a partial order on sets, and the algebra of subsets forms a Boolean algebra.

If A and B are sets and every element of A is an element of B, then A is a subset of B, denoted by A ⊆ B, and equivalently B is a superset of A, denoted by B ⊇ A. If A is a subset of B but A is not equal to B, then A is a proper subset of B.

For any set S, the inclusion relation ⊆ is a partial order on the set P(S) of all subsets of S, defined by A ≤ B ⟺ A ⊆ B. We may also partially order P(S) by reverse set inclusion, defining A ≤ B ⟺ B ⊆ A. When written as a quantified formula, A ⊆ B is represented as ∀x (x ∈ A ⇒ x ∈ B).

A set A is a subset of B if and only if their intersection is equal to A. Formally: A ⊆ B ⇔ A ∩ B = A.
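The intersection characterization can be checked directly with Python's built-in set type, whose `<=` operator tests the subset relation and `&` computes intersection; the concrete sets here are illustrative:

```python
# Python's built-in sets model the subset relation directly:
# A <= B tests A ⊆ B, and A & B is the intersection A ∩ B.
A = {1, 2}
B = {1, 2, 3}

# A ⊆ B holds exactly when A ∩ B = A.
print(A <= B)            # True
print((A & B) == A)      # True

# The characterization fails together with the subset relation:
C = {1, 4}
print(C <= B)            # False
print((C & B) == C)      # False
```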
A set A is a subset of B if and only if their union is equal to B. Formally: A ⊆ B ⇔ A ∪ B = B. A finite set A is a subset of B if and only if the cardinality of their intersection is equal to the cardinality of A. Formally: A ⊆ B ⇔ |A ∩ B| = |A|.

Some authors use the symbols ⊂ and ⊃ to indicate subset and superset respectively; for these authors, it is true of every set A that A ⊂ A. Other authors prefer to use the symbols ⊂ and ⊃ to indicate proper subset and proper superset respectively; this usage makes ⊆ and ⊂ analogous to the inequality symbols ≤ and <. For example, if x ≤ y then x may or may not equal y, but if x < y then x definitely does not equal y. Using the convention that ⊂ denotes proper subset, if A ⊆ B then A may or may not equal B, but if A ⊂ B then A definitely does not equal B.

For example, the set A = {1, 2} is a proper subset of B = {1, 2, 3}, thus both expressions A ⊆ B and A ⊊ B are true. The set D = {1, 2, 3} is a subset, but not a proper subset, of E = {1, 2, 3}, thus D ⊆ E is true while D ⊊ E is not. Any set is a subset of itself, but not a proper subset; the empty set, denoted by ∅, is a subset of any given set X.
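Python's set operators follow the "⊂ means proper subset" convention: `<=` is inclusion and `<` is proper inclusion. A brief sketch, with illustrative sets, also checking the union and cardinality characterizations:

```python
A = {1, 2}
B = {1, 2, 3}
D = {1, 2, 3}
E = {1, 2, 3}

# Python's < requires A ⊆ B and A ≠ B, i.e. proper subset.
print(A < B)    # True:  A is a proper subset of B
print(D <= E)   # True:  D ⊆ E
print(D < E)    # False: D equals E, so the inclusion is not proper

# Union characterization: A ⊆ B ⇔ A ∪ B = B
print((A | B) == B)           # True
# Cardinality characterization (finite sets): A ⊆ B ⇔ |A ∩ B| = |A|
print(len(A & B) == len(A))   # True
```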
It is always a proper subset of any set except itself. The set {1, 2, 3} is a proper subset of {1, 2, 3, 4}. The set of natural numbers is a proper subset of the set of rational numbers; these are two examples in which both the subset and the whole set are infinite, and the subset has the same cardinality as the whole. The set of rational numbers is a proper subset of the set of real numbers; in this example both sets are infinite, but the latter set has a larger cardinality than the former.

Inclusion is the canonical partial order in the sense that every partially ordered set is isomorphic to some collection of sets ordered by inclusion. The ordinal numbers are a simple example: if each ordinal n is identified with the set [n] of all ordinals less than or equal to n, then a ≤ b if and only if [a] ⊆ [b].

For the power set P(S) of a set S, the inclusion partial order is the Cartesian product of k = |S| copies of the partial order on {0, 1} for which 0 < 1. This can be illustrated by enumerating S = {s1, s2, …, sk} and associating with each subset T ⊆ S the k-tuple from {0, 1}^k of which the ith coordinate is 1 if and only if si is a member of T.
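The k-tuple encoding of subsets can be sketched in Python; the three-element set S used here is illustrative, and the check confirms that inclusion of subsets corresponds to the coordinatewise order on the tuples:

```python
from itertools import product

# Encode each subset T of S = {s1, ..., sk} as a k-tuple of 0s and 1s:
# the i-th coordinate is 1 iff si is in T.  Inclusion of subsets then
# corresponds to the coordinatewise <= order on the tuples.
S = ['a', 'b', 'c']

def to_tuple(T):
    return tuple(1 if s in T else 0 for s in S)

def subsets():
    for bits in product((0, 1), repeat=len(S)):
        yield {s for s, b in zip(S, bits) if b}

# Check: T ⊆ U  ⇔  to_tuple(T) <= to_tuple(U) coordinatewise.
for T in subsets():
    for U in subsets():
        coordwise = all(x <= y for x, y in zip(to_tuple(T), to_tuple(U)))
        assert (T <= U) == coordwise

print(len(list(subsets())))  # 8 = 2**3 subsets of a 3-element set
```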
Material conditional
The material conditional is a logical connective, symbolized by a rightward arrow "→" (some texts instead use the symbol "⊃"). The material conditional is used to form statements of the form p → q, read as "if p, then q". Unlike the English construction "if... then...", the material conditional statement p → q does not conventionally specify a causal relationship between p and q. It means "if p is true, then q is true", in the sense that the statement p → q is false only when p is true and q is false. In the bivalent truth table of p → q, if p is false then p → q is true regardless of whether q is true or false, and p → q is likewise true whenever q is true. This truth table is useful in proving some mathematical theorems.

Conditional statements may be nested, so that either or both of the antecedent and the consequent may themselves be conditional statements. In the example (p → q) → (r → s), meaning "if the truth of p implies the truth of q, then the truth of r implies the truth of s", both the antecedent and the consequent are conditional statements.
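A minimal sketch in Python: the material conditional as a truth function, with its full bivalent truth table printed by enumeration (the function name `implies` is an illustrative choice):

```python
def implies(p, q):
    """Material conditional: false only when p is true and q is false."""
    return (not p) or q

# Full bivalent truth table for p -> q:
for p in (True, False):
    for q in (True, False):
        print(p, q, implies(p, q))

# Nested conditionals compose directly, e.g. (p -> q) -> (r -> s):
print(implies(implies(True, False), implies(True, True)))  # True
```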
In classical logic, p → q is logically equivalent to ¬(p ∧ ¬q), and by De Morgan's law logically equivalent to ¬p ∨ q. In minimal logic, by contrast, p → q only logically entails ¬(p ∧ ¬q). Logicians have many different views on the nature of material implication and approaches to explain its sense.

The compound p → q is false if and only if p is true and q is false; by the same stroke, p → q is true if and only if either p is false or q is true. The → symbol is a function that takes pairs of truth values of the components p, q and maps them to the truth value of the compound p → q. The truth value of p → q is thus a function of the truth values of its components; hence, this interpretation is called truth-functional. The compound p → q is logically equivalent to ¬p ∨ q and to its contrapositive ¬q → ¬p. It is not, however, equivalent to ¬p → ¬q, which is instead equivalent to q → p. The truth table associated with the material conditional p → q is identical to that of ¬p ∨ q. It is as follows:

p  q  p → q
T  T    T
T  F    F
F  T    T
F  F    T

It may be useful to note that in Boolean algebra, true and false can be denoted as 1 and 0, with an equivalent table.
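These classical equivalences (and the one non-equivalence) can be verified exhaustively over the four truth-value pairs; a small sketch:

```python
def implies(p, q):
    """Material conditional as a truth function."""
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        pq = implies(p, q)
        # p -> q  is equivalent to  not-p or q
        assert pq == ((not p) or q)
        # p -> q  is equivalent to its contrapositive  not-q -> not-p
        assert pq == implies(not q, not p)
        # not-p -> not-q is NOT generally p -> q; it equals q -> p instead
        assert implies(not p, not q) == implies(q, p)

# Witness of the non-equivalence: p = False, q = True
print(implies(False, True), implies(not False, not True))  # True False
```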
The material conditional can also be considered as a symbol of a formal theory, taken as a set of sentences, satisfying all the classical inferences involving →, in particular the characteristic rules such as modus ponens. Unlike the truth-functional approach, this one permits the examination of structurally identical propositional forms in various logical systems, where somewhat different properties may be demonstrated. For example, in intuitionistic logic, which rejects proofs by contraposition as valid rules of inference, (p → q) ⇒ ¬p ∨ q is not a propositional theorem, but the material conditional is used to define negation.

When studying logic formally, the material conditional is distinguished from the semantic consequence relation ⊨. We say A ⊨ B if every interpretation that makes A true also makes B true. However, there is a close relationship between the two in most logics, including classical logic. For example, the following principles hold: if Γ ⊨ ψ then ∅ ⊨ (φ₁ ∧ ⋯ ∧ φₙ) → ψ for some φ₁, …, φₙ ∈ Γ.
The converse of the above principle also holds, and both → and ⊨ are monotonic.
Intransitivity
In mathematics, intransitivity is a property of binary relations that are not transitive. This may refer to any relation that is not transitive, or to the stronger property of antitransitivity, which describes a relation that is never transitive. A relation is transitive if, whenever it relates some A to some B and that B to some C, it also relates that A to that C. Some authors call a relation intransitive if it is simply not transitive, i.e. ¬(∀a, b, c: a R b ∧ b R c ⇒ a R c). This statement is equivalent to ∃a, b, c: a R b ∧ b R c ∧ ¬(a R c).

For instance, in the food chain, wolves feed on deer and deer feed on grass, but wolves do not feed on grass; thus, the feed on relation among life forms is intransitive in this sense. Another example, one that does not involve preference loops, arises in freemasonry: in some instances lodge A recognizes lodge B, and lodge B recognizes lodge C, but lodge A does not recognize lodge C; thus the recognition relation among Masonic lodges is intransitive. The term intransitive is also used to refer to the stronger property of antitransitivity.
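A brief sketch in Python: a relation as a set of ordered pairs, with a brute-force transitivity check applied to the food-chain example (the names and helper are illustrative):

```python
from itertools import product

def is_transitive(relation, domain):
    """True iff aRb and bRc always imply aRc over the given domain."""
    return all(
        (a, c) in relation
        for a, b, c in product(domain, repeat=3)
        if (a, b) in relation and (b, c) in relation
    )

# The "feeds on" relation from the food-chain example:
feeds_on = {("wolf", "deer"), ("deer", "grass")}
life_forms = {"wolf", "deer", "grass"}

print(is_transitive(feeds_on, life_forms))  # False: wolves do not feed on grass
```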
We just saw that the feed on relation is not transitive, but it still contains some transitivity: for instance, humans feed on rabbits, rabbits feed on carrots, and humans also feed on carrots. A relation is antitransitive if this never occurs at all, i.e. ∀a, b, c: a R b ∧ b R c ⇒ ¬(a R c). Many authors use the term intransitivity to mean antitransitivity.

An example of an antitransitive relation is the defeated relation in knockout tournaments: if player A defeated player B and player B defeated player C, then A can never have played C, and therefore A has not defeated C. By transposition, each of the following formulas is equivalent to antitransitivity of R:

∀a, b, c: a R b ∧ a R c ⇒ ¬(b R c)
∀a, b, c: a R c ∧ b R c ⇒ ¬(a R b)

An antitransitive relation is always irreflexive, and an irreflexive and left-unique relation is always antitransitive. The term intransitivity is often used when speaking of scenarios in which a relation describes the relative preferences between pairs of options, and weighing several options produces a "loop" of preference: A is preferred to B, B is preferred to C, and C is preferred to A. Rock, paper, scissors is an example of such a loop.
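The distinction between "not transitive" and "antitransitive" can be sketched with a brute-force check; the relations below are the examples from the text, encoded as pair sets:

```python
from itertools import product

def is_antitransitive(relation, domain):
    """True iff aRb and bRc always imply NOT aRc over the given domain."""
    return all(
        (a, c) not in relation
        for a, b, c in product(domain, repeat=3)
        if (a, b) in relation and (b, c) in relation
    )

# "feeds on" is intransitive but not antitransitive: the human/rabbit/carrot
# triple is a transitive instance.
feeds_on = {("human", "rabbit"), ("rabbit", "carrot"), ("human", "carrot")}
print(is_antitransitive(feeds_on, {"human", "rabbit", "carrot"}))  # False

# A knockout-tournament "defeated" relation is antitransitive:
# A beat B and B beat C, so A never played (hence never defeated) C.
defeated = {("A", "B"), ("B", "C")}
print(is_antitransitive(defeated, {"A", "B", "C"}))  # True
```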
Real combative relations of competing species, strategies of individual animals, and fights of remote-controlled vehicles in BattleBots shows can be cyclic as well. Assuming no option is preferred to itself, i.e. the relation is irreflexive, a preference relation with a loop is not transitive; for if it were, each option in the loop would be preferred to every option, including itself. This can be illustrated for the example of a loop among A, B, C. Assume the relation is transitive. Since A is preferred to B and B is preferred to C, A is preferred to C; but since C is preferred to A, A is then preferred to A, contradicting irreflexivity. Such a preference loop is therefore known as an intransitivity.

Notice that a cycle is neither necessary nor sufficient for a binary relation to be non-transitive. For example, an equivalence relation possesses cycles but is transitive. Now, consider the relation "is an enemy of" and suppose that the relation is symmetric and satisfies the condition that for any country, any enemy of an enemy of the country is not itself an enemy of the country.
This is an example of an antitransitive relation; in particular, by virtue of being antitransitive the relation is not transitive.

The game of rock, paper, scissors is another example. The relation over rock, paper, and scissors is "defeats", and the standard rules of the game are such that rock defeats scissors, scissors defeats paper, and paper defeats rock. Furthermore, scissors does not defeat rock, paper does not defeat scissors, rock does not defeat paper, and no option defeats itself. This information can be depicted in a table, where the first argument of the relation is the row and the second is the column; a one indicates that the relation holds, a zero that it does not:

            rock  paper  scissors
rock          0     0       1
paper         1     0       0
scissors      0     1       0
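The table above can be checked mechanically; a minimal sketch encoding "defeats" as a pair set and verifying both antitransitivity and irreflexivity by brute force:

```python
from itertools import product

# "Defeats" relation of rock, paper, scissors, as rows/columns of the table:
# the pair (x, y) means "x defeats y".
options = ["rock", "paper", "scissors"]
defeats = {
    ("rock", "scissors"),
    ("scissors", "paper"),
    ("paper", "rock"),
}

# Antitransitive: aRb and bRc never allow aRc ...
assert all(
    (a, c) not in defeats
    for a, b, c in product(options, repeat=3)
    if (a, b) in defeats and (b, c) in defeats
)
# ... and irreflexive: no option defeats itself.
assert all((x, x) not in defeats for x in options)
print("the 'defeats' relation is antitransitive and irreflexive")
```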
Nontransitive dice
A set of dice is nontransitive if it contains three dice, A, B, and C, with the property that A rolls higher than B more than half the time, and B rolls higher than C more than half the time, but it is not true that A rolls higher than C more than half the time. In other words, a set of dice is nontransitive if the binary relation "X rolls a higher number than Y more than half the time" on its elements is not transitive. It is possible to find sets of dice with the even stronger property that, for each die in the set, there is another die that rolls a higher number than it more than half the time. Using such a set of dice, one can invent games which are biased in ways that people unused to nontransitive dice might not expect.

Consider the following set of dice. Die A has sides 2, 2, 4, 4, 9, 9. Die B has sides 1, 1, 6, 6, 8, 8. Die C has sides 3, 3, 5, 5, 7, 7. The probability that A rolls a higher number than B, the probability that B rolls higher than C, and the probability that C rolls higher than A are all 5/9, so this set of dice is nontransitive.
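The 5/9 probabilities can be computed exactly by enumerating all 36 face pairs for each matchup; a sketch using exact rational arithmetic (the helper name `p_beats` is an illustrative choice):

```python
from fractions import Fraction
from itertools import product

def p_beats(x, y):
    """Exact probability that die x rolls strictly higher than die y."""
    wins = sum(1 for a, b in product(x, y) if a > b)
    return Fraction(wins, len(x) * len(y))

A = [2, 2, 4, 4, 9, 9]
B = [1, 1, 6, 6, 8, 8]
C = [3, 3, 5, 5, 7, 7]

print(p_beats(A, B))  # 5/9
print(p_beats(B, C))  # 5/9
print(p_beats(C, A))  # 5/9
```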
In fact, it has the stronger property that, for each die in the set, there is another die that rolls a higher number than it more than half the time.

Now, consider the following game, played with a set of dice. The first player chooses a die from the set, the second player then chooses one of the remaining dice, and both players roll their die; the player who rolls the higher number wins. If this game is played with a transitive set of dice, it is either fair or biased in favor of the first player, because the first player can always find a die that will not be beaten by any other die more than half the time. If it is played with the set of dice described above, however, the game is biased in favor of the second player, because the second player can always find a die that will beat the first player's die with probability 5/9. The following tables show all possible outcomes for all 3 pairs of dice.

Though the three nontransitive dice A, B, C

A: 2, 2, 6, 6, 7, 7
B: 1, 1, 5, 5, 9, 9
C: 3, 3, 4, 4, 8, 8

with P(A > B) = P(B > C) = P(C > A) = 5/9, and the three nontransitive dice A′, B′, C′

A′: 2, 2, 4, 4, 9, 9
B′: 1, 1, 6, 6, 8, 8
C′: 3, 3, 5, 5, 7, 7

with P(A′ > B′) = P(B′ > C′) = P(C′ > A′) = 5/9, win against each other with equal probability, the two sets are not equivalent.
While the first set of dice has a 'highest' die, the second set has a 'lowest' die. Rolling all three dice of a set and always using the highest score for evaluation will show a different winning pattern for the two sets. With the first set of dice, die B will win with the highest probability, and dice A and C will each win with a probability of 64/216. With the second set of dice, die C′ will win with the lowest probability, and dice A′ and B′ will each win with a probability of 80/216.

Efron's dice are a set of four nontransitive dice invented by Bradley Efron. The four dice A, B, C, D have the following numbers on their six faces:

A: 4, 4, 4, 4, 0, 0
B: 3, 3, 3, 3, 3, 3
C: 6, 6, 2, 2, 2, 2
D: 5, 5, 5, 1, 1, 1

Each die is beaten by the previous die in the list (wrapping around), with a probability of 2/3: P(A > B) = P(B > C) = P(C > D) = P(D > A) = 2/3. B's value is constant; B beats C with a 2/3 probability because only two of C's faces are higher. P(C > D) can be calculated by summing conditional probabilities for two events: C rolls 6 (probability 1/3) and always beats D, or C rolls 2 (probability 2/3) and beats D only when D rolls 1 (probability 1/2), giving 1/3 + 2/3 × 1/2 = 2/3.
The probability of die A beating C is 4/9, and the probability of A beating D is 1/3. So the likelihood of A beating any other randomly selected die is:

1/3 × (2/3 + 4/9 + 1/3) = 13/27

Similarly, die B beats C two-thirds of the time but beats A only one-third of the time, and the probability of die B beating D is 1/2. So the likelihood of B beating any other randomly selected die is:

1/3 × (1/3 + 2/3 + 1/2) = 1/2
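The Efron cycle and the averaged win probabilities can be verified by exhaustive enumeration; a sketch with exact rationals (the helper names are illustrative):

```python
from fractions import Fraction
from itertools import product

def p_beats(x, y):
    """Exact probability that die x rolls strictly higher than die y."""
    wins = sum(1 for a, b in product(x, y) if a > b)
    return Fraction(wins, len(x) * len(y))

A = [4, 4, 4, 4, 0, 0]
B = [3, 3, 3, 3, 3, 3]
C = [6, 6, 2, 2, 2, 2]
D = [5, 5, 5, 1, 1, 1]

# Each die beats the next in the cycle A > B > C > D > A with probability 2/3:
for x, y in [(A, B), (B, C), (C, D), (D, A)]:
    assert p_beats(x, y) == Fraction(2, 3)

# Likelihood of beating a uniformly chosen one of the three other dice:
def avg_win(x, others):
    return Fraction(1, 3) * sum(p_beats(x, y) for y in others)

print(avg_win(A, [B, C, D]))  # 13/27
print(avg_win(B, [A, C, D]))  # 1/2
```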
Stirling numbers of the second kind
In mathematics, particularly in combinatorics, a Stirling number of the second kind is the number of ways to partition a set of n objects into k non-empty subsets, and is denoted by S(n, k) or by the brace notation {n k}. Stirling numbers of the second kind occur in the field of mathematics called combinatorics and in the study of partitions. They are one of two kinds of Stirling numbers, the other kind being called Stirling numbers of the first kind. Mutually inverse triangular matrices can be formed from the Stirling numbers of each kind according to the parameters n, k.

The Stirling numbers of the second kind, written S(n, k) or {n k} or with other notations, count the number of ways to partition a set of n labelled objects into k nonempty unlabelled subsets. Equivalently, they count the number of different equivalence relations with exactly k equivalence classes that can be defined on an n-element set; in fact, there is a bijection between the set of such partitions and the set of such equivalence relations on a given set. We have S(0, 0) = 1 and, for n ≥ 1, S(n, n) = 1 and S(n, 1) = 1, as the only way to partition an n-element set into n parts is to put each element of the set into its own part, and the only way to partition a nonempty set into one part is to put all of the elements in the same part.
They can be calculated using the following explicit formula:

S(n, k) = (1/k!) ∑_{i=0}^{k} (−1)^i C(k, i) (k − i)^n

where C(k, i) is a binomial coefficient. The Stirling numbers of the second kind may also be characterized as the numbers that arise when one expresses powers of an indeterminate x in terms of the falling factorials (x)_n = x(x − 1)⋯(x − n + 1). In particular,

∑_{k=0}^{n} S(n, k) (x)_k = x^n.

Various notations have been used for Stirling numbers of the second kind. The brace notation was used by Imanuel Marx and Antonio Salmeri in 1962 for variants of these numbers, and this led Knuth to use it; however, according to the third edition of The Art of Computer Programming, the notation was used earlier by Jovan Karamata in 1935. The notation S(n, k) was used by Richard Stanley in his book Enumerative Combinatorics.

Since the Stirling number S(n, k) counts set partitions of an n-element set into k parts, the sum

B_n = ∑_{k=0}^{n} S(n, k)

over all values of k is the total number of partitions of a set with n members; this number is known as the nth Bell number. Analogously, the ordered Bell numbers can be computed from the Stirling numbers of the second kind via

a_n = ∑_{k=0}^{n} k! S(n, k).
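A minimal sketch of the explicit formula and the Bell-number sum in Python (integer division is safe here because the alternating sum is always divisible by k!):

```python
from math import comb, factorial

def stirling2(n, k):
    """S(n, k) via the explicit formula (1/k!) * sum_i (-1)^i C(k, i) (k - i)^n."""
    return sum((-1) ** i * comb(k, i) * (k - i) ** n
               for i in range(k + 1)) // factorial(k)

# S(3, 2) = 3: the partitions {1,2}{3}, {1,3}{2}, {2,3}{1}
print(stirling2(3, 2))  # 3

# Bell numbers: B_n = sum over k of S(n, k)
def bell(n):
    return sum(stirling2(n, k) for k in range(n + 1))

print([bell(n) for n in range(6)])  # [1, 1, 2, 5, 15, 52]
```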
Below is a triangular array of values for the Stirling numbers of the second kind (rows n = 0 to 5, columns k = 0 to n):

n\k  0  1   2   3   4  5
0    1
1    0  1
2    0  1   1
3    0  1   3   1
4    0  1   7   6   1
5    0  1  15  25  10  1

As with the binomial coefficients, this table could be extended to k > n, but those entries would all be 0. Stirling numbers of the second kind obey the recurrence relation

S(n + 1, k) = k S(n, k) + S(n, k − 1) for k > 0,

with initial conditions S(0, 0) = 1 and S(n, 0) = S(0, n) = 0 for n > 0.
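The recurrence translates directly into a memoized Python function that reproduces the triangle above; a sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """S(n, k) via the recurrence S(n, k) = k*S(n-1, k) + S(n-1, k-1)."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

# Print the triangular array, rows n = 0..5:
for n in range(6):
    print([stirling2(n, k) for k in range(n + 1)])
```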