1.
Glyph
–
In typography, a glyph /ˈɡlɪf/ is an elemental symbol within an agreed set of symbols, intended to represent a readable character for the purposes of writing. In most languages written in a variety of the Latin alphabet, the dot on a lowercase i is not a glyph, because it does not convey any distinction. In Turkish, however, it is a glyph, because that language has two distinct versions of the letter i, with and without a dot. In Japanese syllabaries, a number of the characters are made up of more than one separate mark; in some cases, additional marks fulfill the role of diacritics, differentiating distinct characters. In general, a diacritic is a glyph, even if it is contiguous with the rest of the character. Two or more glyphs which have the same significance, whether used interchangeably or chosen depending on context, are called allographs of each other. The term has been used in English since 1727, borrowed from glyphe, from the Greek γλυφή, glyphē, "carving", and the verb γλύφειν, glýphein, "to hollow out, engrave, carve". The word glyph first came to widespread European attention with engravings of ancient inscriptions. In archaeology, a glyph is a carved or inscribed symbol; it may be a pictogram or ideogram, or part of a writing system such as a syllabary. In 1897 Dana Evans discovered glyphs written on rocks in the Colorado Desert, and these ancient characters have been called the most enlightening discovery in Native American history in the 19th century. In typography, a glyph has a different definition: it is the specific shape or design of a character. The same is true in computing: in computing as well as typography, the term character refers to a grapheme or grapheme-like unit of text, as found in natural language writing systems, while a glyph is the graphical unit that renders it. The range of glyphs required increases correspondingly. In summary, in typography and computing, a glyph is a graphical unit. In graphonomics, the term glyph is used for a noncharacter. Most typographic glyphs originate from the characters of a typeface. In mobile text input technologies, Glyph is a family of text input methods based on the decomposition of letters into basic shapes.
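The character/glyph distinction described above shows up concretely in Unicode text handling: what renders as a single glyph may be stored as one code point or as several. A minimal sketch using Python's standard `unicodedata` module:

```python
import unicodedata

# One "character" as the reader sees it can be built from different
# code-point sequences, yet render as the same glyph.
precomposed = "\u00EF"   # "ï" as a single code point
decomposed = "i\u0308"   # "i" followed by a combining diaeresis

assert precomposed != decomposed                  # different sequences
assert len(precomposed) == 1 and len(decomposed) == 2

# Unicode normalization maps both spellings to a canonical form.
assert unicodedata.normalize("NFC", decomposed) == precomposed
assert unicodedata.normalize("NFD", precomposed) == decomposed
```

This is why text comparison usually normalizes strings first: equality of code-point sequences is stricter than equality of rendered glyphs.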
In role-playing games, the word glyph is sometimes used alongside the word rune in describing magical drawings or etchings. Runes often refer to placing an image on an object or person to empower it, whereas the magic in a glyph lies dormant and is only triggered when the glyph is read or approached.
2.
Greek alphabet
–
It is the ancestor of the Latin and Cyrillic scripts. In its classical and modern forms, the alphabet has 24 letters. Modern and Ancient Greek use different diacritics; in standard Modern Greek spelling, orthography has been simplified to the monotonic system. In both Ancient and Modern Greek, the letters of the Greek alphabet have fairly stable and consistent symbol-to-sound mappings, making pronunciation of words largely predictable, and Ancient Greek spelling was generally near-phonemic. Among consonant letters, all letters that denoted voiced plosive consonants and aspirated plosives in Ancient Greek stand for corresponding fricative sounds in Modern Greek. Several historical vowel distinctions have merged, and this leads to groups of vowel letters denoting identical sounds today; Modern Greek orthography remains true to the historical spellings in most of these cases. Modern Greek speakers typically use the same modern sound mappings in reading Greek of all historical stages; in other countries, students of Ancient Greek may use a variety of conventional approximations of the historical sound system in pronouncing Ancient Greek. Several letter combinations have special conventional sound values different from those of their single components; among them are several digraphs of vowel letters that formerly represented diphthongs but are now monophthongized. In addition to the three mentioned above, there is also ⟨ου⟩, pronounced /u/. The Ancient Greek diphthongs ⟨αυ⟩, ⟨ευ⟩ and ⟨ηυ⟩ are pronounced [av], [ev] and [iv] respectively in voicing environments in Modern Greek. The Modern Greek consonant combinations ⟨μπ⟩ and ⟨ντ⟩ stand for [b] and [d] respectively, and ⟨τζ⟩ stands for [dz]. In addition, both in Ancient and Modern Greek, the letter ⟨γ⟩, before another velar consonant, stands for the velar nasal [ŋ]; thus ⟨γγ⟩ and ⟨γκ⟩ are pronounced like English ⟨ng⟩. There are also the combinations ⟨γχ⟩ and ⟨γξ⟩. The accent marks were originally designed to mark different forms of the phonological pitch accent in Ancient Greek.
The letter rho (ρ), although not a vowel, also carries a rough breathing in word-initial position. If a rho was geminated within a word, the first ρ always had the smooth breathing and the second the rough breathing, leading to the transliteration rrh. The vowel letters ⟨α, η, ω⟩ carry an additional diacritic in certain words, the iota subscript. This iota represents the former offglide of what were originally long diphthongs, ⟨ᾱι, ηι, ωι⟩. Another diacritic used in Greek is the diaeresis, indicating a hiatus. In 1982, a new, simplified orthography, known as monotonic, was adopted for use in Modern Greek by the Greek state. Although it is not a diacritic, the comma has a function as a silent letter in a handful of Greek words, principally distinguishing ό,τι ("whatever") from ότι ("that"). There are many different methods of rendering Greek text or Greek names in the Latin script. The form in which classical Greek names are conventionally rendered in English goes back to the way Greek loanwords were incorporated into Latin in antiquity. In this system, ⟨κ⟩ is replaced with ⟨c⟩, the diphthongs ⟨αι⟩ and ⟨οι⟩ are rendered as ⟨ae⟩ and ⟨oe⟩ respectively, and ⟨ει⟩ and ⟨ου⟩ are simplified to ⟨i⟩ and ⟨u⟩ respectively.
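The classical romanization conventions just described are mechanical enough to sketch in code. The digraph rules below are the ones the text names; the single-letter table is the conventional Latin mapping, added here as an assumption for completeness, and real romanization (breathings, accents, final sigma context) is more involved.

```python
# A hedged sketch of the classical Latin romanization conventions
# described above. Digraphs must be replaced before single letters.
DIGRAPHS = [("αι", "ae"), ("οι", "oe"), ("ει", "i"), ("ου", "u"),
            ("γγ", "ng"), ("γκ", "nc")]
LETTERS = {"α": "a", "β": "b", "γ": "g", "δ": "d", "ε": "e", "ζ": "z",
           "η": "e", "θ": "th", "ι": "i", "κ": "c", "λ": "l", "μ": "m",
           "ν": "n", "ξ": "x", "ο": "o", "π": "p", "ρ": "r", "σ": "s",
           "ς": "s", "τ": "t", "υ": "y", "φ": "ph", "χ": "ch",
           "ψ": "ps", "ω": "o"}

def romanize(word: str) -> str:
    out = word.lower()
    for greek, latin in DIGRAPHS:       # digraphs first: ⟨οι⟩ -> oe, etc.
        out = out.replace(greek, latin)
    return "".join(LETTERS.get(ch, ch) for ch in out)

print(romanize("οικονομια"))  # oeconomia
```

The ⟨οι⟩ → oe and ⟨κ⟩ → c rules together reproduce the Latin spelling oeconomia (whence English "economy").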
3.
Cyrillic script
–
The Cyrillic script /sᵻˈrɪlɪk/ is a writing system used for various alphabets across eastern Europe and north and central Asia. It is based on the Early Cyrillic alphabet, which was developed in the First Bulgarian Empire during the 9th century AD at the Preslav Literary School. As of 2011, around 252 million people in Eurasia use it as the alphabet for their national languages. With the accession of Bulgaria to the European Union on 1 January 2007, Cyrillic became the third official script of the European Union, following the Latin and Greek scripts. Cyrillic is derived from the Greek uncial script, augmented by letters from the older Glagolitic alphabet; these additional letters were used for Old Church Slavonic sounds not found in Greek. The script is named in honor of the two Byzantine brothers, Saints Cyril and Methodius, who created the Glagolitic alphabet earlier on; modern scholars believe that Cyrillic was developed and formalized by early disciples of Cyril and Methodius. In the early 18th century the Cyrillic script used in Russia was heavily reformed by Peter the Great: the new form of letters became closer to the Latin alphabet, several archaic letters were removed, and several letters were personally designed by Peter the Great. West European typography culture was also adopted. Cyrillic script spread throughout the East and South Slavic territories, being adopted for writing local languages, such as Old East Slavic. Its adaptation to local languages produced a number of Cyrillic alphabets. Capital and lowercase letters were not distinguished in old manuscripts. Yeri (Ы) was originally a ligature of Yer and I; iotation was indicated by ligatures formed with the letter І: Ꙗ, Ѥ, Ю, Ѩ, Ѭ. Sometimes different letters were used interchangeably, for example И = І = Ї, and there were also commonly used ligatures like ѠТ = Ѿ. The letters also had numeric values, based not on Cyrillic alphabetical order but inherited from the letters' Greek ancestors.
The early Cyrillic alphabet is difficult to represent on computers: many of the letterforms differed from modern Cyrillic, varied a great deal in manuscripts, and changed over time, and few fonts include adequate glyphs to reproduce the alphabet. The Unicode 5.1 standard, released on 4 April 2008, greatly improves computer support for early Cyrillic and the modern Church Slavonic language. In Microsoft Windows, Segoe UI is notable for having complete support for the archaic Cyrillic letters since Windows 8. The development of Cyrillic typography passed directly from the medieval stage to the late Baroque, without a Renaissance phase as in Western Europe. Late medieval Cyrillic letters show a tendency to be very tall and narrow. Peter the Great, Czar of Russia, mandated the use of westernized letter forms in the early 18th century; over time, these were largely adopted in the other languages that use the script. The development of some Cyrillic computer typefaces from Latin ones has also contributed to the visual Latinization of Cyrillic type. Cyrillic uppercase and lowercase letter forms are not as differentiated as in Latin typography.
4.
Statistics
–
Statistics is a branch of mathematics dealing with the collection, analysis, interpretation, presentation, and organization of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or process to be studied; populations can be diverse topics such as all people living in a country or every atom composing a crystal. Statistics deals with all aspects of data, including the planning of data collection in terms of the design of surveys and experiments. The statistician Sir Arthur Lyon Bowley defined statistics as "numerical statements of facts in any department of inquiry placed in relation to each other". When census data cannot be collected, statisticians collect data by developing specific experiment designs; representative sampling assures that inferences and conclusions can safely extend from the sample to the population as a whole. In contrast, an observational study does not involve experimental manipulation. Inferences in mathematical statistics are made under the framework of probability theory, which deals with the analysis of random phenomena. A standard statistical procedure involves the test of the relationship between two data sets, or between a data set and synthetic data drawn from an idealized model. A hypothesis is proposed for the statistical relationship between the two data sets, and this is compared as an alternative to an idealized null hypothesis of no relationship between the two data sets. Rejecting or disproving the null hypothesis is done using statistical tests that quantify the sense in which the null can be proven false. Working from a null hypothesis, two basic forms of error are recognized: Type I errors (rejecting a true null hypothesis) and Type II errors (failing to reject a false null hypothesis). Multiple problems have come to be associated with this framework, ranging from obtaining a sufficient sample size to specifying an adequate null hypothesis, and measurement processes that generate statistical data are also subject to error.
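The null-hypothesis procedure sketched above can be made concrete with a small worked example. The scenario and numbers here (60 heads in 100 flips, a 5% significance level) are illustrative assumptions, not part of the source; the test is an exact two-sided binomial test written out by hand.

```python
from math import comb

# Is a coin fair, given 60 heads in 100 flips? Null hypothesis: p = 0.5.
def binom_pmf(k: int, n: int, p: float) -> float:
    return comb(n, k) * p**k * (1 - p) ** (n - k)

n, heads = 100, 60
# Two-sided p-value: probability, under the null, of a result at least
# as far from n/2 = 50 as the one observed.
p_value = sum(binom_pmf(k, n, 0.5) for k in range(n + 1)
              if abs(k - n / 2) >= abs(heads - n / 2))

# Rejecting when p_value < 0.05 bounds the Type I error rate at 5%;
# failing to reject a biased coin would be a Type II error.
print(f"p-value = {p_value:.4f}, reject: {p_value < 0.05}")
```

With these numbers the two-sided p-value comes out slightly above 0.05, so the null of a fair coin is (barely) not rejected at the 5% level.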
Many of these errors are classified as random (noise) or systematic (bias), and the presence of missing data or censoring may result in biased estimates; specific techniques have been developed to address these problems. Statistics continues to be an area of active research, for example on the problem of how to analyze big data. Statistics is a body of science that pertains to the collection, analysis, interpretation or explanation, and presentation of data. Some consider statistics to be a distinct mathematical science rather than a branch of mathematics. While many scientific investigations make use of data, statistics is concerned with the use of data in the context of uncertainty; mathematical techniques used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure-theoretic probability theory. In applying statistics to a problem, it is common practice to start with a population or process to be studied. Populations can be diverse topics such as all people living in a country or every atom composing a crystal. Ideally, statisticians compile data about the entire population, and this may be organized by governmental statistical institutes.
5.
Probability theory
–
Probability theory is the branch of mathematics concerned with probability, the analysis of random phenomena. It is not possible to predict precisely the results of random events, but the aggregate behavior of many events exhibits regular patterns; two representative mathematical results describing such patterns are the law of large numbers and the central limit theorem. As a mathematical foundation for statistics, probability theory is essential to many human activities that involve quantitative analysis of large sets of data. Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state; a great discovery of twentieth-century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics. Christiaan Huygens published a book on the subject in 1657, and the theory was developed further in the 19th century. Initially, probability theory mainly considered discrete events, and its methods were mainly combinatorial. Eventually, analytical considerations compelled the incorporation of continuous variables into the theory, and this culminated in modern probability theory, on foundations laid by Andrey Nikolaevich Kolmogorov. Kolmogorov combined the notion of sample space, introduced by Richard von Mises, with measure theory, and presented his axiom system for probability theory in 1933. This became the mostly undisputed axiomatic basis for modern probability theory. Most introductions to probability theory treat discrete probability distributions and continuous probability distributions separately; the more mathematically advanced, measure theory-based treatment of probability covers the discrete, the continuous, and mixtures of the two. Consider an experiment that can produce a number of outcomes. The set of all outcomes is called the sample space of the experiment. The power set of the sample space is formed by considering all different collections of possible results. For example, rolling an honest die produces one of six possible results, and one collection of possible results corresponds to getting an odd number. Thus, the subset {1, 3, 5} is an element of the power set of the sample space of die rolls.
In this case, {1, 3, 5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, that event is said to have occurred. Probability is a way of assigning every event a value between zero and one, with the requirement that the event made up of all possible results (in our example, the event {1, 2, 3, 4, 5, 6}) be assigned a value of one. For instance, the probability that any one of the events {1, 6}, {3}, or {2, 4} will occur is 5/6. This is the same as saying that the probability of the event {1, 2, 3, 4, 6} is 5/6; this event encompasses the possibility of any number except five being rolled. The mutually exclusive event {5} has a probability of 1/6, and the event {1, 2, 3, 4, 5, 6} has a probability of 1, that is, absolute certainty. Discrete probability theory deals with events that occur in countable sample spaces. Modern definition: the modern definition starts with a finite or countable set called the sample space, which relates to the set of all possible outcomes in the classical sense, denoted by Ω.
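The die example above can be written out directly: the sample space, its power set (the collection of all events), and the probabilities of the events discussed. A small sketch:

```python
from fractions import Fraction
from itertools import chain, combinations

sample_space = {1, 2, 3, 4, 5, 6}

def powerset(s):
    """All subsets of s, i.e. every possible event."""
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def prob(event):
    # Fair die: each outcome carries probability 1/6.
    return Fraction(len(event), len(sample_space))

events = powerset(sample_space)
assert len(events) == 2 ** 6                     # 64 possible events
assert prob({1, 3, 5}) == Fraction(1, 2)         # an odd number
assert prob({1, 2, 3, 4, 6}) == Fraction(5, 6)   # anything but five
assert prob({5}) == Fraction(1, 6)
assert prob(sample_space) == 1                   # certainty
```

Note that {1, 6} ∪ {3} ∪ {2, 4} = {1, 2, 3, 4, 6}, which is why "any one of those events occurs" has the same probability, 5/6.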
6.
Sample space
–
In probability theory, the sample space of an experiment or random trial is the set of all possible outcomes or results of that experiment. A sample space is usually denoted using set notation, and the possible outcomes are listed as elements in the set. It is common to refer to a sample space by the labels S, Ω, or U. For example, if the experiment is tossing a coin, the sample space is typically the set {heads, tails}. For tossing two coins, the sample space is {(heads, heads), (heads, tails), (tails, heads), (tails, tails)}. For tossing a single six-sided die, the sample space is {1, 2, 3, 4, 5, 6}. A well-defined sample space is one of three elements in a probabilistic model (a probability space); the other two are a well-defined set of possible events (a σ-algebra) and a probability assigned to each event (a probability measure). For many experiments, there may be more than one plausible sample space available. For example, when drawing a card from a standard deck of fifty-two playing cards, one possibility for the sample space could be the various ranks, while another could be the suits. Still other sample spaces are possible, such as right side up or upside down, if some cards have been flipped when shuffling. Some treatments of probability assume that the various outcomes of an experiment are always defined so as to be equally likely; the result of this is that every possible combination of individuals who could be chosen for the sample is also equally likely. In an elementary approach to probability, any subset of the sample space is usually called an event. However, this gives rise to problems when the sample space is infinite, so that a more precise definition of an event is necessary. Under this definition, only measurable subsets of the sample space, constituting a σ-algebra over the sample space itself, are considered events. See also: probability space, set, event, σ-algebra.
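The sample spaces named above are small enough to enumerate directly. A brief sketch, including the two plausible card-drawing sample spaces (ranks versus suits) mentioned in the text:

```python
from itertools import product

# The coin and die sample spaces, written as Python sets.
coin = {"H", "T"}
two_coins = set(product(coin, repeat=2))   # ordered pairs of outcomes
die = {1, 2, 3, 4, 5, 6}

assert two_coins == {("H", "H"), ("H", "T"), ("T", "H"), ("T", "T")}
assert len(die) == 6

# Drawing one card: two plausible sample spaces for the same experiment.
ranks = {"A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"}
suits = {"clubs", "diamonds", "hearts", "spades"}
assert len(ranks) == 13 and len(suits) == 4
```

Which sample space is "right" depends on which aspects of the outcome the model needs to distinguish.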
7.
Event (probability theory)
–
In probability theory, an event is a set of outcomes of an experiment (a subset of the sample space) to which a probability is assigned. A single outcome may be an element of many different events. Any event defines a complementary event, namely the complementary set (the event not occurring). Typically, when the sample space is finite, any subset of the sample space is an event. However, this approach does not work well in cases where the sample space is uncountably infinite. So, when defining a probability space it is possible, and often necessary, to exclude certain subsets of the sample space from being events. If we assemble a deck of 52 playing cards with no jokers and draw a single card from the deck, then the sample space is a 52-element set, as each card is a possible outcome; an event, however, is any subset of the sample space, including any singleton set, the empty set and the sample space itself. Other events are subsets of the sample space that contain multiple elements. So, for example, potential events include: "Red and black at the same time without being a joker" (0 elements), "The 5 of Hearts" (1 element), "A King" (4 elements), "A Face card" (12 elements), "A Spade" (13 elements), "A Face card or a red suit" (32 elements). Since all events are sets, they are usually written as sets. Defining all subsets of the sample space as events works well when there are only finitely many outcomes, but for many standard probability distributions, such as the normal distribution, attempts to define probabilities for all subsets of the real numbers run into difficulties when one considers badly behaved sets, such as those that are nonmeasurable. Hence, it is necessary to restrict attention to a limited family of subsets. The most natural choice is the family of Borel measurable sets derived from unions and intersections of intervals; however, the larger class of Lebesgue measurable sets proves more useful in practice. In the general description of probability spaces, an event may be defined as an element of a selected σ-algebra of subsets of the sample space. Under this definition, any subset of the sample space that is not an element of the σ-algebra is not an event.
With a reasonable specification of the probability space, however, all events of interest are elements of the σ-algebra. Even though events are subsets of some sample space Ω, they are often written as propositional formulas involving random variables. For example, if X is a random variable defined on the sample space Ω, the event {ω ∈ Ω : u < X(ω) ≤ v} can be written more conveniently as, simply, u < X ≤ v.
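The card events listed above are ordinary set operations on the 52-element sample space, and their sizes can be checked by enumeration. A small sketch:

```python
from itertools import product

# The 52-outcome sample space: (rank, suit) pairs.
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["clubs", "diamonds", "hearts", "spades"]
deck = set(product(ranks, suits))
assert len(deck) == 52

# Events are subsets of the sample space.
five_of_hearts = {("5", "hearts")}                 # a singleton event
kings = {c for c in deck if c[0] == "K"}           # "A King"
spades = {c for c in deck if c[1] == "spades"}     # "A Spade"
face_cards = {c for c in deck if c[0] in {"J", "Q", "K"}}
red = {c for c in deck if c[1] in {"diamonds", "hearts"}}

assert len(kings) == 4 and len(spades) == 13 and len(face_cards) == 12

# "A Face card or a red suit" is the union of two events;
# its size follows inclusion-exclusion: 12 + 26 - 6 = 32.
assert len(face_cards | red) == 32
# "Red and black at the same time" is the empty event.
assert red & (deck - red) == set()
```

Each assertion mirrors one of the element counts quoted in the text.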
8.
Random variable
–
In probability and statistics, a random variable, random quantity, aleatory variable, or stochastic variable is a variable quantity whose value depends on possible outcomes. It is common that these outcomes depend on physical variables that are not well understood. For example, when you toss a coin, the final outcome of heads or tails depends on the uncertain physics of the toss. Which outcome will be observed is not certain. (Of course the coin could get caught in a crack in the floor, but such a possibility is excluded from consideration.) The domain of a random variable is the set of possible outcomes. In the case of the coin, there are two possible outcomes, namely heads or tails. Since one of these outcomes must occur, either the event that the coin lands heads or the event that the coin lands tails must have non-zero probability. A random variable is defined as a function that maps outcomes to numerical quantities, typically real numbers. In this sense, it is a procedure for assigning a numerical quantity to each outcome, and, contrary to its name, this procedure itself is neither random nor variable. What is random is the physics that describes how the coin lands. A random variable's possible values might represent the possible outcomes of a yet-to-be-performed experiment, and they may also conceptually represent either the results of an objectively random process or the subjective randomness that results from incomplete knowledge of a quantity. The mathematics works the same regardless of the interpretation in use. A random variable has a probability distribution, which specifies the probability that its value falls in any given interval. Two random variables with the same probability distribution can still differ in terms of their associations with, or independence from, other random variables. The realizations of a random variable, that is, the results of randomly choosing values according to the variable's probability distribution function, are called random variates.
The formal mathematical treatment of random variables is a topic in probability theory. In that context, a random variable is understood as a function defined on a sample space whose outputs are numerical values. A random variable X: Ω → E is a function from a set of possible outcomes Ω to a measurable space E. The technical axiomatic definition requires Ω to be the sample space of a probability space. A random variable does not return a probability; the probability of a set of outcomes is given by the probability measure P with which Ω is equipped. Rather, X returns a numerical quantity of outcomes in Ω, e.g. the number of heads in a collection of coin flips.
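The "function on a sample space" view is easy to make literal: below, X maps each sequence of three coin flips to its number of heads, and the distribution of X comes from the probability measure on the sample space (here, all outcomes equally likely), not from X itself. A minimal sketch:

```python
from itertools import product

# The sample space: all 8 sequences of three coin flips.
omega = list(product("HT", repeat=3))

def X(outcome):
    """A random variable: maps an outcome to a number (heads count)."""
    return outcome.count("H")

# The distribution of X is induced by the uniform measure on omega.
dist = {k: sum(1 for w in omega if X(w) == k) / len(omega)
        for k in range(4)}
assert dist == {0: 1/8, 1: 3/8, 2: 3/8, 3: 1/8}
```

X itself is a fixed, deterministic function; only the underlying outcome is random, exactly as the text says.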
9.
Probability measure
–
In mathematics, a probability measure is a real-valued function defined on a set of events in a probability space that satisfies measure properties such as countable additivity. The difference between a probability measure and the more general notion of measure is that a probability measure must assign the value 1 to the entire probability space. Probability measures have applications in many fields, from physics to finance. The requirements for a function μ to be a probability measure on a probability space are that: μ must return results in the unit interval [0, 1], returning 0 for the empty set and 1 for the entire space; and μ must satisfy the countable additivity property that for all countable collections {E_i}_{i∈I} of pairwise disjoint sets, μ(⋃_{i∈I} E_i) = ∑_{i∈I} μ(E_i). For example, given three elements 1, 2 and 3 with probabilities 1/4, 1/4 and 1/2, the value assigned to {1, 3} is 1/4 + 1/2 = 3/4. The conditional probability based on the intersection of events, defined as μ(B | A) = μ(A ∩ B) / μ(A), satisfies the probability measure requirements so long as μ(A) is not zero. If there is a unique probability measure that must be used to price assets in a market, then the market is called a complete market. Not all measures that intuitively represent chance or likelihood are probability measures; for instance, although the fundamental concept of a system in statistical mechanics is a measure space, such measures are not always probability measures. Probability measures are also used in mathematical biology; for instance, in comparative sequence analysis a probability measure may be defined for the likelihood that a variant may be permissible for an amino acid in a sequence.
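The three-element example above can be checked against the measure requirements directly: values in the unit interval, total mass 1, and additivity on disjoint sets. A small sketch:

```python
from fractions import Fraction
from itertools import chain, combinations

# Outcomes 1, 2, 3 with probabilities 1/4, 1/4 and 1/2.
weights = {1: Fraction(1, 4), 2: Fraction(1, 4), 3: Fraction(1, 2)}

def mu(event):
    """The measure of an event: the sum of its outcomes' weights."""
    return sum(weights[x] for x in event)

assert mu({1, 2, 3}) == 1                 # the whole space gets 1
assert mu({1, 3}) == Fraction(3, 4)       # the example from the text
assert mu({1}) + mu({3}) == mu({1, 3})    # additivity on disjoint sets

# Every event's measure lies in the unit interval [0, 1].
events = chain.from_iterable(combinations([1, 2, 3], r) for r in range(4))
assert all(0 <= mu(e) <= 1 for e in events)
```

On a finite space, countable additivity reduces to this finite additivity check; the infinite case is what forces the full measure-theoretic machinery.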
10.
Joint probability distribution
–
In the case of only two random variables, this is called a bivariate distribution, but the concept generalizes to any number of random variables, giving a multivariate distribution. The joint probability distribution can be expressed either in terms of a joint cumulative distribution function or in terms of a joint probability density function (in the continuous case) or joint probability mass function (in the discrete case). Consider the flip of two fair coins; let A and B be discrete random variables associated with the outcomes of the first and second coin flips respectively. If a coin displays heads then the associated random variable takes the value 1, and 0 otherwise. The joint probability mass function of A and B defines probabilities for each pair of outcomes. All possible outcomes are (A = 0, B = 0), (A = 0, B = 1), (A = 1, B = 0), (A = 1, B = 1). Since each outcome is equally likely, the joint probability mass function becomes P(A, B) = 1/4 when A, B ∈ {0, 1}. Since the coin flips are independent, the joint probability mass function is the product of the marginals: P(A, B) = P(A) P(B). Each coin flip is a Bernoulli trial and the sequence of flips follows a Bernoulli distribution. Consider the roll of a fair die and let A = 1 if the number is even (i.e. 2, 4 or 6) and A = 0 otherwise. Furthermore, let B = 1 if the number is prime (i.e. 2, 3 or 5) and B = 0 otherwise. Then the joint distribution of A and B, expressed as a probability mass function, is P(A = 0, B = 0) = P({1}) = 1/6, P(A = 1, B = 0) = P({4, 6}) = 2/6, P(A = 0, B = 1) = P({3, 5}) = 2/6, P(A = 1, B = 1) = P({2}) = 1/6. These probabilities necessarily sum to 1, since the probability of some combination of A and B occurring is 1. The joint probability mass function of two discrete random variables X, Y is P(X = x and Y = y) = P(X = x | Y = y) · P(Y = y) = P(Y = y | X = x) · P(X = x). In the continuous case, since these are probability distributions, one has ∫_x ∫_y f_{X,Y}(x, y) dy dx = 1. Formally, f_{X,Y} is the probability density function of (X, Y) with respect to the product measure on the respective supports of X and Y. Two discrete random variables X and Y are independent if the joint probability mass function satisfies P(X = x and Y = y) = P(X = x) · P(Y = y) for all x and y.
Similarly, two absolutely continuous random variables are independent if and only if f_{X,Y}(x, y) = f_X(x) · f_Y(y) for all x and y. Such independence and conditional independence relations can be represented with a Bayesian network or with copula functions.
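The die example above (A = "even", B = "prime") can be tabulated by counting outcomes, which also verifies that these two variables are not independent. A short sketch:

```python
from fractions import Fraction

primes = {2, 3, 5}
joint = {(a, b): Fraction(0) for a in (0, 1) for b in (0, 1)}
for roll in range(1, 7):                      # a fair six-sided die
    a, b = int(roll % 2 == 0), int(roll in primes)
    joint[(a, b)] += Fraction(1, 6)

assert joint[(0, 0)] == Fraction(1, 6)        # {1}
assert joint[(1, 0)] == Fraction(2, 6)        # {4, 6}
assert joint[(0, 1)] == Fraction(2, 6)        # {3, 5}
assert joint[(1, 1)] == Fraction(1, 6)        # {2}
assert sum(joint.values()) == 1

# A and B are NOT independent: P(A=1, B=1) != P(A=1) * P(B=1).
pA1 = joint[(1, 0)] + joint[(1, 1)]           # = 1/2
pB1 = joint[(0, 1)] + joint[(1, 1)]           # = 1/2
assert joint[(1, 1)] != pA1 * pB1             # 1/6 != 1/4
```

Contrast this with the two-coin example, where the joint mass function does factor into the product of the marginals.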
11.
Marginal distribution
–
In probability theory and statistics, the marginal distribution of a subset of a collection of random variables is the probability distribution of the variables contained in the subset. It gives the probabilities of various values of the variables in the subset without reference to the values of the other variables; this contrasts with a conditional distribution, which gives the probabilities contingent upon the values of the other variables. The term marginal variable is used to refer to those variables in the subset of variables being retained. These terms are dubbed "marginal" because they used to be found by summing values in a table along rows or columns, and writing the sum in the margins of the table. The distribution of the marginal variables is obtained by marginalizing over the distribution of the variables being discarded. Several different analyses may be done, each treating a different subset of variables as the marginal variables. Given two random variables X and Y whose joint distribution is known, the marginal distribution of X is simply the probability distribution of X averaging over information about Y. It is the probability distribution of X when the value of Y is not known, and it is typically calculated by summing or integrating the joint probability distribution over Y. For discrete random variables, the marginal probability mass function can be written as Pr(X = x). This is Pr(X = x) = ∑_y Pr(X = x, Y = y) = ∑_y Pr(X = x | Y = y) Pr(Y = y), where Pr(X = x, Y = y) is the joint distribution of X and Y. In this case, the variable Y has been marginalized out. Bivariate marginal and joint probabilities for discrete random variables are often displayed as two-way tables. Similarly for continuous random variables, the marginal probability density function can be written as p_X(x). This is p_X(x) = ∫_y p_{X,Y}(x, y) dy = ∫_y p_{X|Y}(x | y) p_Y(y) dy. Again, the variable Y has been marginalized out. Note that p_X(x) = E_Y[p_{X|Y}(x | Y)]; this follows from the definition of expected value. Let H be a random variable taking one value from {hit, not hit}. Let L be a random variable taking one value from {red, yellow, green}.
Realistically, H will be dependent on L; that is, P(H = hit) will take different values depending on whether L is red, yellow or green (and likewise for P(H = not hit)). A person is, for example, far more likely to be hit by a car trying to cross while the lights for cross traffic are green than if they are red. In general, a pedestrian can be hit if the lights are red OR if the lights are yellow OR if the lights are green. So, in this case, the answer for the marginal probability can be found by summing P(H | L) for all possible values of L, with each value of L weighted by its probability of occurring. A conditional probability table can display the probability of being hit for each state of the lights.
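The marginalization formula above can be run on the pedestrian example; the light probabilities and hit probabilities below are illustrative numbers chosen for this sketch, not values from the source.

```python
from fractions import Fraction as F

# Assumed distribution of the light state L and the conditional
# probability of being hit given each state (illustrative only).
p_L = {"red": F(2, 10), "yellow": F(1, 10), "green": F(7, 10)}
p_hit_given_L = {"red": F(1, 100), "yellow": F(5, 100), "green": F(20, 100)}

# Marginalize out L:  P(hit) = sum over l of P(hit | L=l) * P(L=l).
p_hit = sum(p_hit_given_L[l] * p_L[l] for l in p_L)
print(p_hit)  # 147/1000 with these assumed numbers
```

The sum runs along the "L" dimension of the joint table, which is exactly the row/column summing into the table margin that gives the marginal distribution its name.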
12.
Conditional probability
–
In probability theory, conditional probability is a measure of the probability of an event given that another event has occurred. For example, the probability that any given person has a cough on any given day may be only 5%; but if we know or assume that the person has a cold, then the conditional probability of coughing given the cold might be a much higher 75%. The concept of conditional probability is one of the most fundamental in probability theory, but conditional probabilities can be slippery and require careful interpretation. For example, there need not be a causal or temporal relationship between A and B, and P(A | B) may or may not be equal to P(A), the unconditional probability of A. If P(A | B) = P(A), then events A and B are said to be independent; in such a case, knowledge about either event does not give information on the other. Also, in general, P(A | B) is not equal to P(B | A). For example, if you have cancer you might have a 90% chance of testing positive for cancer; in this case what is being measured is the probability of testing positive given that the event B (having cancer) has occurred. Alternatively, you can test positive for cancer but have only a 10% chance of actually having cancer, because cancer is very rare; in this case what is being measured is the probability of the event B (having cancer) given that the event A (the test is positive) has occurred. Falsely equating the two probabilities causes various errors of reasoning such as the base rate fallacy. Conditional probabilities can be reversed using Bayes' theorem. The logic behind the definition is that if the possible outcomes are restricted to B, the sample space is effectively reduced to B. Note that this is a definition but not a theoretical result: we simply denote the quantity P(A ∩ B) / P(B) as P(A | B) and call it the conditional probability of A given B. Further, this multiplication axiom introduces a symmetry with the summation axiom for mutually exclusive events: P(A ∪ B) = P(A) + P(B) − P(A ∩ B). If P(B) = 0, the conditional probability P(A | B) is undefined by this formula; however, it is possible to define a conditional probability with respect to a σ-algebra of such events. The case where B has zero measure is problematic; see conditional expectation for more information.
Conditioning on an event may be generalized to conditioning on a random variable. Let X be a random variable; we assume for the sake of presentation that X is discrete, that is, X takes on only finitely many values x. The conditional probability of A given X is then defined as the random variable, written P(A | X), that takes on the value P(A | X = x) whenever the value of X is x.
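The cancer-test numbers above can be reconciled with Bayes' theorem. The sensitivity 0.90 matches the text; the prevalence (1%) and false-positive rate (8%) below are assumed for illustration, chosen so that P(cancer | positive) lands near the 10% the text mentions.

```python
# Assumed inputs (only the 0.90 sensitivity comes from the text).
p_cancer = 0.01                 # prevalence P(cancer)
p_pos_given_cancer = 0.90       # sensitivity P(positive | cancer)
p_pos_given_healthy = 0.08      # false-positive rate P(positive | no cancer)

# Law of total probability, then Bayes' theorem to reverse the conditioning.
p_pos = (p_pos_given_cancer * p_cancer
         + p_pos_given_healthy * (1 - p_cancer))
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos

# Equating P(positive | cancer) = 0.90 with P(cancer | positive) is the
# base rate fallacy: here the latter is only about 0.10.
print(round(p_cancer_given_pos, 3))
```

The rarity of the disease (the base rate) is what drives the two conditional probabilities so far apart.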
13.
Independence (probability theory)
–
In probability theory, two events are independent, statistically independent, or stochastically independent if the occurrence of one does not affect the probability of occurrence of the other. Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other. Two events A and B are independent if and only if their joint probability equals the product of their probabilities: P(A ∩ B) = P(A) P(B). Although the derived expressions in terms of conditional probability may seem more intuitive, they are not the preferred definition, as the conditional probabilities may be undefined if P(A) or P(B) is 0; furthermore, the preferred definition makes clear by symmetry that when A is independent of B, B is also independent of A. A finite set of events is pairwise independent if every pair of events is independent, that is, if P(A ∩ B) = P(A) P(B) for all distinct pairs of events A, B. A finite set of events is mutually independent if every event is independent of any intersection of the other events, that is, if and only if for every n-element subset {A_1, …, A_n}, P(A_1 ∩ … ∩ A_n) = P(A_1) ⋯ P(A_n). This is called the multiplication rule for independent events; note that it is not a single condition involving only the product of all the probabilities of all single events. For more than two events, a mutually independent set of events is pairwise independent, but the converse is not necessarily true. Two random variables X and Y are independent if and only if the elements of the π-system generated by them are independent, that is to say, for every a and b, the events {X ≤ a} and {Y ≤ b} are independent events. A set of random variables is pairwise independent if and only if every pair of random variables is independent. A set of random variables is mutually independent if and only if for any finite subset X_1, …, X_n and any finite sequence of numbers a_1, …, a_n, the events {X_1 ≤ a_1}, …, {X_n ≤ a_n} are mutually independent events. The measure-theoretically inclined may prefer to substitute events {X ∈ A} for events {X ≤ a} in the above definition, and that definition is exactly equivalent to the one above when the values of the random variables are real numbers.
It has the advantage of working also for complex-valued random variables or for random variables taking values in any measurable space. Intuitively, two random variables X and Y are conditionally independent given Z if, once Z is known, the value of Y adds no further information about X. For instance, two measurements X and Y of the same underlying quantity Z are not independent, but they are conditionally independent given Z. The formal definition of conditional independence is based on the idea of conditional distributions. If X, Y, and Z are discrete random variables, then X and Y are conditionally independent given Z if P(X = x | Y = y, Z = z) = P(X = x | Z = z) for any x, y and z with P(Y = y, Z = z) > 0. That is, the conditional distribution for X given Y and Z is the same as that given Z alone.
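The gap between pairwise and mutual independence noted above has a classic finite witness, checkable by enumeration: on two fair dice, take A = "first die is odd", B = "second die is odd", C = "the sum is odd". Each pair is independent, but the triple is not. A sketch:

```python
from fractions import Fraction
from itertools import product

omega = list(product(range(1, 7), repeat=2))   # 36 equally likely rolls

def P(event):
    """Probability of an event given as a predicate on outcomes."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

A = lambda w: w[0] % 2 == 1          # first die odd
B = lambda w: w[1] % 2 == 1          # second die odd
C = lambda w: (w[0] + w[1]) % 2 == 1 # sum odd
both = lambda e1, e2: (lambda w: e1(w) and e2(w))

# Every pair satisfies the product rule ...
assert P(both(A, B)) == P(A) * P(B)
assert P(both(A, C)) == P(A) * P(C)
assert P(both(B, C)) == P(B) * P(C)
# ... but the three events are not mutually independent:
# A and B both odd forces an even sum, so the triple intersection is empty.
assert P(lambda w: A(w) and B(w) and C(w)) == 0
assert P(A) * P(B) * P(C) == Fraction(1, 8)
```

This is why mutual independence must be checked on every subset, not just on pairs.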
14.
Conditional independence
–
In the standard notation of probability theory, two events R and B are conditionally independent given a third event Y if and only if Pr(R ∩ B | Y) = Pr(R | Y) Pr(B | Y), or equivalently, Pr(R | B ∩ Y) = Pr(R | Y). Two random variables X and Y are conditionally independent given a random variable Z if the analogous equation holds for all events defined in terms of X and Y. Two random variables X and Y are conditionally independent given a σ-algebra Σ if the equation holds for all R in σ(X) and B in σ(Y). Two random variables X and Y are conditionally independent given a random variable W if they are independent given σ(W), the σ-algebra generated by W. Conditional independence of more than two events, or of more than two random variables, is defined analogously. The following two examples show that X ⊥ Y neither implies nor is implied by X ⊥ Y | W. First, suppose W takes the values 0 and 1 with equal probability. When W = 0, take X and Y to be independent, each having the value 0 with probability 0.99 and the value 1 otherwise; when W = 1, X and Y are again independent, but this time each has the value 1 with probability 0.99. Then X and Y are dependent, because Pr(X = 0) < Pr(X = 0 | Y = 0): Pr(X = 0) = 0.5, but if Y = 0 then it is very likely that W = 0 and thus that X = 0 as well. For the second example, suppose X ⊥ Y, each taking the values 0 and 1 with probability 0.5, and let W be the product X × Y. Then when W = 0, Pr(X = 0 | W = 0) = 2/3, but Pr(X = 0 | Y = 0, W = 0) = 1/2, so X and Y are not conditionally independent given W. This is also an example of "explaining away"; see Kevin Murphy's tutorial, where X and Y take the values "brainy" and "sporty". The discussion on StackExchange provides a couple of further useful examples. Let the two events be the probabilities of persons A and B getting home in time for dinner, and let the third event be the fact that a snow storm hit the city. While both A and B have a lower probability of getting home in time for dinner, the lower probabilities will still be independent of each other; that is, the knowledge that A is late does not tell you whether B will be late. However, if you have information that they live in the same neighborhood, use the same transportation, and work at the same place, then the two events are NOT conditionally independent. Conditional independence thus depends on the nature of the third event. If you roll two dice, one may assume that the two dice behave independently of each other.
Looking at the result of one die will not tell you about the result of the second die. In other words, two events can be independent, but NOT conditionally independent. Height and vocabulary are not independent, but they are conditionally independent given age. Let p be the proportion of voters who will vote "yes" in an upcoming referendum. In taking an opinion poll, one chooses n voters randomly from the population. For i = 1, …, n, let Xi = 1 or 0 corresponding, respectively, to whether or not the ith chosen voter will vote "yes". In a frequentist approach to statistical inference one would not attribute any probability distribution to p, and one would say that X1
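The "explaining away" example above (X and Y independent fair bits, W = X × Y) can be checked by enumerating the four equally likely outcomes:

```python
from itertools import product

# X and Y are independent fair bits; W is their product. Each of the four
# (x, y) combinations has probability 1/4.
outcomes = [(x, y, x * y) for x, y in product((0, 1), repeat=2)]

def pr(event, given=lambda x, y, w: True):
    # Conditional probability by counting equally likely outcomes.
    num = sum(1 for x, y, w in outcomes if event(x, y, w) and given(x, y, w))
    den = sum(1 for x, y, w in outcomes if given(x, y, w))
    return num / den

# X and Y are independent...
assert pr(lambda x, y, w: x == 0 and y == 0) == \
       pr(lambda x, y, w: x == 0) * pr(lambda x, y, w: y == 0)

# ...but NOT conditionally independent given W:
p_x0_w0 = pr(lambda x, y, w: x == 0, given=lambda x, y, w: w == 0)              # 2/3
p_x0_y0w0 = pr(lambda x, y, w: x == 0, given=lambda x, y, w: y == 0 and w == 0)  # 1/2
print(p_x0_w0, p_x0_y0w0)
```

Learning Y = 0 "explains away" the observation W = 0, so it lowers the conditional probability that X = 0 from 2/3 back to 1/2.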
15.
Law of large numbers
–
In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed. The LLN is important because it guarantees stable long-term results for the averages of some random events. For example, while a casino may lose money in a single spin of the roulette wheel, its earnings will tend towards a predictable percentage over a large number of spins. Any winning streak by a player will eventually be overcome by the parameters of the game. It is important to remember that the LLN only applies when a large number of observations is considered. There is no principle that a small number of observations will coincide with the expected value or that a streak of one value will immediately be balanced by the others. For example, a single roll of a fair, six-sided die produces one of the numbers 1, 2, 3, 4, 5, or 6, each with equal probability, so the expected value of a roll is (1 + 2 + 3 + 4 + 5 + 6)/6 = 3.5. It follows from the law of large numbers that the empirical probability of success in a series of Bernoulli trials will converge to the theoretical probability. For a Bernoulli random variable, the expected value is the theoretical probability of success. For example, a coin toss is a Bernoulli trial. When a fair coin is flipped once, the probability that the outcome will be heads is equal to 1/2. Therefore, according to the law of large numbers, the proportion of heads in a large number of coin flips should be roughly 1/2. In particular, the proportion of heads after n flips will almost surely converge to 1/2 as n approaches infinity. Though the proportion of heads approaches 1/2, almost surely the absolute difference between the number of heads and tails will become large as the number of flips becomes large. That is, the probability that the absolute difference remains a small number approaches zero as the number of flips becomes large. Also, almost surely the ratio of the absolute difference to the number of flips will approach zero.
Intuitively, the expected absolute difference grows, but at a slower rate than the number of flips. The Italian mathematician Gerolamo Cardano stated without proof that the accuracies of empirical statistics tend to improve with the number of trials. This was then formalized as a law of large numbers. A special form of the LLN, for a binary random variable, was first proved by Jacob Bernoulli. It took him over 20 years to develop a sufficiently rigorous mathematical proof, which was published in his Ars Conjectandi in 1713. He named this his "Golden Theorem", but it became generally known as Bernoulli's Theorem.
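The coin-flip claim above is easy to see empirically. A minimal simulation sketch (the seed and sample sizes are arbitrary choices):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def head_proportion(n):
    # Flip a fair coin n times and return the fraction of heads.
    heads = sum(random.randint(0, 1) for _ in range(n))
    return heads / n

# The proportion of heads drifts toward 1/2 as the number of flips grows.
for n in (100, 10_000, 1_000_000):
    print(n, head_proportion(n))
```

Note that convergence of the proportion says nothing about the raw count: the absolute difference between heads and tails typically grows on the order of the square root of the number of flips, even as its ratio to n shrinks to zero.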
16.
Bayes' theorem
–
In probability theory and statistics, Bayes' theorem describes the probability of an event, based on prior knowledge of conditions that might be related to the event. One of the many applications of Bayes' theorem is Bayesian inference. When applied, the probabilities involved in Bayes' theorem may have different probability interpretations. With the Bayesian probability interpretation the theorem expresses how a subjective degree of belief should rationally change to account for the availability of related evidence; Bayesian inference is fundamental to Bayesian statistics. Bayes' theorem is named after Rev. Thomas Bayes, who first provided an equation that allows new evidence to update beliefs. It was further developed by Pierre-Simon Laplace, who first published the modern formulation in his 1812 "Théorie analytique des probabilités". Sir Harold Jeffreys put Bayes' algorithm and Laplace's formulation on an axiomatic basis. Jeffreys wrote that Bayes' theorem "is to the theory of probability what the Pythagorean theorem is to geometry". Bayes' theorem is stated mathematically as the equation P(A | B) = P(B | A) P(A) / P(B), where A and B are events and P(B) ≠ 0. P(A) and P(B) are the probabilities of observing A and B without regard to each other. P(A | B), a conditional probability, is the probability of observing event A given that B is true. P(B | A) is the probability of observing event B given that A is true. Bayes' theorem was named after the Reverend Thomas Bayes, who studied how to compute a distribution for the probability parameter of a binomial distribution. Bayes' unpublished manuscript was significantly edited by Richard Price before it was posthumously read at the Royal Society. Price edited Bayes' major work "An Essay towards solving a Problem in the Doctrine of Chances", and wrote an introduction to the paper which provides some of the philosophical basis of Bayesian statistics.
In 1765 he was elected a Fellow of the Royal Society in recognition of his work on the legacy of Bayes. The French mathematician Pierre-Simon Laplace reproduced and extended Bayes' results in 1774, apparently quite unaware of Bayes' work. The Bayesian interpretation of probability was developed mainly by Laplace. Stephen Stigler suggested in 1983 that Bayes' theorem was discovered by Nicholas Saunderson, a blind English mathematician, some time before Bayes; that interpretation, however, has been disputed. Martyn Hooper and Sharon McGrayne have argued that Richard Price's contribution was substantial: "By modern standards, Price discovered Bayes' work, recognized its importance, corrected it, contributed to the article, and found a use for it. The modern convention of employing Bayes' name alone is unfair but so entrenched that anything else makes little sense." Suppose a drug test is 99% sensitive and 99% specific. That is, the test will produce 99% true positive results for drug users and 99% true negative results for non-drug users. Suppose that 0.5% of people are users of the drug. If a randomly selected individual tests positive, the probability that he is a user is only about 33%. This surprising result arises because the number of non-users is very large compared to the number of users, so the number of false positives outweighs the number of true positives. To use concrete numbers, if 1000 individuals are tested, there are expected to be 995 non-users and 5 users. From the 995 non-users, 0.01 × 995 ≈ 10 false positives are expected
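The drug-test posterior follows directly from the equation P(A | B) = P(B | A) P(A) / P(B), with the denominator expanded by the law of total probability:

```python
# Drug-test example from the article: 99% sensitivity, 99% specificity,
# 0.5% prevalence. Posterior P(user | positive) via Bayes' theorem.
sensitivity = 0.99   # P(positive | user)
specificity = 0.99   # P(negative | non-user)
prevalence = 0.005   # P(user)

# P(positive) = P(positive | user) P(user) + P(positive | non-user) P(non-user)
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

p_user_given_positive = sensitivity * prevalence / p_positive
print(round(p_user_given_positive, 3))  # ≈ 0.332
```

Even with a highly accurate test, the posterior is only about one in three, because the 1% false-positive rate applied to the large non-user population outweighs the true positives.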
17.
Tree diagram (probability theory)
–
In probability theory, a tree diagram may be used to represent a probability space. Tree diagrams may represent a series of independent events (such as a set of coin flips) or conditional probabilities (such as drawing cards from a deck without replacement). Each node on the diagram represents an event and is associated with the probability of that event. The root node represents the certain event and therefore has probability 1. Each set of sibling nodes represents an exclusive and exhaustive partition of the parent event. The probability associated with a node is the chance of that event occurring after the parent event occurs. The probability that the series of events leading to a particular node will occur is equal to the product of the probabilities along the path from the root to that node.
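The path-product rule above can be sketched with a small nested dictionary. The weather/bus events and their probabilities here are purely illustrative; each node stores its conditional probability given the parent:

```python
# Illustrative two-level tree: each entry is event -> (P(event | parent), children).
tree = {
    "rain": (0.3, {"bus late": (0.6, {}), "bus on time": (0.4, {})}),
    "no rain": (0.7, {"bus late": (0.2, {}), "bus on time": (0.8, {})}),
}

def leaf_probabilities(node, path_prob=1.0, path=()):
    # Multiply conditional probabilities along each root-to-leaf path.
    results = {}
    for event, (cond_prob, children) in node.items():
        prob = path_prob * cond_prob
        if children:
            results.update(leaf_probabilities(children, prob, path + (event,)))
        else:
            results[path + (event,)] = prob
    return results

leaves = leaf_probabilities(tree)
for path, prob in leaves.items():
    print(" -> ".join(path), prob)

# Sibling nodes partition the parent event, so the leaves sum to 1.
assert abs(sum(leaves.values()) - 1.0) < 1e-12
```

For instance, the path rain -> bus late has probability 0.3 × 0.6 = 0.18, the product of the probabilities along the path.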
18.
Diagram
–
A diagram is a symbolic representation of information according to some visualization technique. Diagrams have been used since ancient times, but became more prevalent during the Enlightenment. Sometimes, the technique uses a three-dimensional visualization which is then projected onto a two-dimensional surface. The word graph is sometimes used as a synonym for diagram. In the specific sense, a diagram is the kind of display that shows qualitative data with shapes that are connected by lines, arrows, or other visual links. In science the term is used in both ways. Lowe, on the other hand, defined diagrams as specifically "abstract graphic portrayals of the subject matter they represent". Or, in Hall's words, "diagrams are simplified figures, caricatures in a way, intended to convey essential meaning". These simplified figures are often based on a set of rules. The basic shape, according to White, can be characterized in terms of elegance, clarity, ease, pattern, simplicity, and validity. Elegance is basically determined by whether or not the diagram is the simplest and most fitting solution to a problem. Many of these types of diagrams are generated using diagramming software such as Visio.
19.
Logic
–
Logic, originally meaning "the word" or "what is spoken", is generally held to consist of the systematic study of the form of arguments. A valid argument is one where there is a specific relation of logical support between the assumptions of the argument and its conclusion. Historically, logic has been studied in philosophy and mathematics, and more recently logic has been studied in computer science, linguistics, psychology, and other fields. The concept of logical form is central to logic. The validity of an argument is determined by its logical form, not by its content. Traditional Aristotelian syllogistic logic and modern symbolic logic are examples of formal logic. Informal logic is the study of natural language arguments; the study of fallacies is an important branch of informal logic. Since much informal argument is not strictly speaking deductive, on some conceptions of logic, informal logic is not logic at all. Formal logic is the study of inference with purely formal content. An inference possesses a purely formal content if it can be expressed as an application of a wholly abstract rule, that is, a rule that is not about any particular thing or property. The works of Aristotle contain the earliest known formal study of logic. Modern formal logic follows and expands on Aristotle. In many definitions of logic, logical inference and inference with purely formal content are the same. This does not render the notion of informal logic vacuous, because no formal logic captures all of the nuances of natural language. Symbolic logic is the study of symbolic abstractions that capture the formal features of logical inference. Symbolic logic is often divided into two main branches, propositional logic and predicate logic. Mathematical logic is an extension of symbolic logic into other areas, in particular to the study of model theory, proof theory, and set theory. Logic is generally considered formal when it analyzes and represents the form of any valid argument type. The form of an argument is displayed by representing its sentences in the formal grammar and symbolism of a logical language to make its content usable in formal inference.
Simply put, to formalise means to translate English sentences into the language of logic; this is called showing the logical form of the argument. It is necessary because indicative sentences of ordinary language show a considerable variety of form and complexity that makes their use in inference impractical. It requires, first, ignoring those grammatical features irrelevant to logic, and replacing ambiguous or alternative logical expressions with expressions of a standard type. Second, certain parts of the sentence must be replaced with schematic letters. Thus, for example, the expression "all Ps are Qs" shows the logical form common to the sentences "all men are mortals", "all cats are carnivores", "all Greeks are philosophers", and so on. The schema can further be condensed into the formula A(P, Q), where the letter A indicates the judgement "all - are -". The importance of form was recognised from ancient times.
20.
Set (mathematics)
–
In mathematics, a set is a well-defined collection of distinct objects, considered as an object in its own right. For example, the numbers 2, 4, and 6 are distinct objects when considered separately, but when considered collectively they form a single set of size three, written {2, 4, 6}. Sets are one of the most fundamental concepts in mathematics. Developed at the end of the 19th century, set theory is now a ubiquitous part of mathematics. In mathematics education, elementary topics such as Venn diagrams are taught at a young age, while more advanced concepts are taught as part of a university degree. The German word Menge, rendered as "set" in English, was coined by Bernard Bolzano in his work The Paradoxes of the Infinite. A set is a collection of distinct objects. The objects that make up a set, also known as its elements or members, can be anything: numbers, people, letters of the alphabet, other sets, and so on. Sets are conventionally denoted with capital letters. Sets A and B are equal if and only if they have precisely the same elements. Cantor's definition turned out to be inadequate; instead, the notion of a set is taken as an undefined primitive notion in axiomatic set theory. There are two ways of describing, or specifying the members of, a set. One way is by intensional definition, using a rule or semantic description: A is the set whose members are the first four positive integers; B is the set of colors of the French flag. The second way is by extension, that is, listing each member of the set. An extensional definition is denoted by enclosing the list of members in curly brackets: C = {4, 2, 1, 3} and D = {blue, white, red}. One often has the choice of specifying a set either intensionally or extensionally. In the examples above, for instance, A = C and B = D. There are two important points to note about sets. First, in an extensional definition, a set member can be listed two or more times, for example, {11, 6, 6}. However, per extensionality, two definitions of sets which differ only in that one of the definitions lists set members multiple times define, in fact, the same set. Hence, the set {11, 6, 6} is identical to the set {11, 6}.
The second important point is that the order in which the elements of a set are listed is irrelevant. We can illustrate these two important points with an example: {6, 11} = {11, 6} = {11, 6, 6, 11}. For sets with many elements, the enumeration of members can be abbreviated. For instance, the set of the first thousand positive integers may be specified extensionally as {1, 2, 3, …, 1000}, where the ellipsis indicates that the list continues in the obvious way. Ellipses may also be used where sets have infinitely many members; thus the set of positive even numbers can be written as {2, 4, 6, 8, …}.
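Both points, that duplicates collapse and that order is irrelevant, are mirrored directly by Python's built-in set type, which also illustrates the intensional/extensional distinction:

```python
# Duplicates collapse and listing order is irrelevant.
assert {11, 6} == {6, 11} == {11, 6, 6, 11}

# Intensional definition (a rule) versus extensional definition (a list)
# of the same set, the first four positive integers:
A_intensional = {n for n in range(1, 5)}
A_extensional = {4, 2, 1, 3}
assert A_intensional == A_extensional

# Abbreviated enumeration: the first thousand positive integers.
big = set(range(1, 1001))
assert len(big) == 1000
```

The set comprehension plays the role of the semantic rule, while the literal plays the role of the curly-bracket listing; extensionality guarantees they denote the same set.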
21.
Element (mathematics)
–
In mathematics, an element, or member, of a set is any one of the distinct objects that make up that set. Writing A = {1, 2, 3, 4} means that the elements of the set A are the numbers 1, 2, 3 and 4. Sets of elements of A, for example {1, 2}, are subsets of A. For example, consider the set B = {1, 2, {3, 4}}. The elements of B are not 1, 2, 3, and 4. Rather, there are only three elements of B, namely the numbers 1 and 2, and the set {3, 4}. The elements of a set can be anything. For example, C = {red, green, blue} is the set whose elements are the colors red, green and blue. The relation "is an element of", also called set membership, is denoted by the symbol ∈. Writing x ∈ A means that x is an element of A. Equivalent expressions are "x is a member of A", "x belongs to A", "x is in A" and "x lies in A". Another possible notation for the same relation is A ∋ x, meaning "A contains x", though it is used less often. The negation of set membership is denoted by the symbol ∉. Writing x ∉ A means that x is not an element of A. The symbol ϵ was first used by Giuseppe Peano in 1889 in his work Arithmetices principia, nova methodo exposita. Here he wrote on page X: "Signum ϵ significat est. Ita a ϵ b legitur a est quoddam b", which means "The symbol ϵ means is. So a ϵ b is read as a is a b". The symbol itself is a stylized lowercase Greek letter epsilon, the first letter of the word ἐστί, which means "is". The Unicode characters for these symbols are U+2208, U+220B and U+2209. The equivalent LaTeX commands are \in, \ni and \notin; Mathematica has the commands \[Element] and \[NotElement]. The number of elements in a set is a property known as cardinality; informally, this is the size of the set. In the above examples the cardinality of the set A is 4. An infinite set is a set with an infinite number of elements, while a finite set is a set with a finite number of elements. The above examples are examples of finite sets; an example of an infinite set is the set of positive integers {1, 2, 3, 4, …}.
Using the sets defined above, namely A = {1, 2, 3, 4}, B = {1, 2, {3, 4}} and C = {red, green, blue}: 2 ∈ A; {3, 4} ∈ B; 3, 4 ∉ B (the set {3, 4} is a member of B, but the numbers 3 and 4 themselves are not); yellow ∉ C. The cardinality of D = {2, 4, 8, 10} is finite; the cardinality of P = {2, 3, 5, 7, 11, 13, …}, the set of prime numbers, is infinite.
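The distinction between membership and membership-of-a-member can be demonstrated in Python; the inner set must be a frozenset, since Python sets may only contain hashable (immutable) elements:

```python
# The sets A, B and C from the article, with {3, 4} as a frozenset so it
# can itself be an element of another set.
A = {1, 2, 3, 4}
B = {1, 2, frozenset({3, 4})}
C = {"red", "green", "blue"}

assert 2 in A                      # 2 ∈ A
assert frozenset({3, 4}) in B      # {3, 4} ∈ B
assert 3 not in B and 4 not in B   # 3, 4 ∉ B: members of a member, not of B
assert "yellow" not in C           # yellow ∉ C
print(len(B))                      # B has exactly three elements
```

len gives the cardinality of a finite set: len(A) is 4 and len(B) is 3, matching the count in the article.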
22.
Euler diagram
–
An Euler diagram is a diagrammatic means of representing sets and their relationships. Typically they involve overlapping shapes, and may be scaled such that the area of a shape is proportional to the number of elements it contains. They are particularly useful for explaining complex hierarchies and overlapping definitions. They are often confused with Venn diagrams; unlike Venn diagrams, which show all possible relations between different sets, the Euler diagram shows only relevant relationships. The first use of "Eulerian circles" is commonly attributed to the Swiss mathematician Leonhard Euler. In the United States, both Venn and Euler diagrams were incorporated as part of instruction in set theory as part of the new math movement of the 1960s. Since then, they have also been adopted by other curriculum fields, such as reading, as well as by organizations. Euler diagrams consist of simple closed shapes in a two-dimensional plane that each depict a set or category. How or whether these shapes overlap demonstrates the relationships between the sets. There are only three possible relationships between any two sets: completely inclusive, partially inclusive, and exclusive. This is also referred to as containment, overlap, or neither; especially in mathematics, it may be referred to as subset, intersection, and disjointness. Curves whose interior zones do not intersect represent disjoint sets. Two curves whose interior zones intersect represent sets that have common elements; a curve that is contained completely within the interior zone of another represents a subset of it. Venn diagrams are a more restrictive form of Euler diagrams. A Venn diagram must contain all 2ⁿ logically possible zones of overlap between its n curves, representing all combinations of inclusion/exclusion of its constituent sets.
Regions not part of the set are indicated by coloring them black. In contrast to Euler diagrams, when the number of sets grows beyond 3 a Venn diagram becomes visually complex, especially compared to the corresponding Euler diagram. The difference between Euler and Venn diagrams can be seen in the following example: the Venn diagram, which uses the same categories of Animal, Mineral, and Four Legs, does not encapsulate these relationships. Traditionally the emptiness of a set in Venn diagrams is depicted by shading in the region; Euler diagrams represent emptiness either by shading or by the absence of a region. Often a set of well-formedness conditions is imposed; these are topological or geometric constraints imposed on the structure of the diagram. For example, connectedness of zones might be enforced, or concurrency of curves or multiple points might be banned. In the adjacent diagram, examples of small Venn diagrams are transformed into Euler diagrams by sequences of transformations; some of the intermediate diagrams have concurrency of curves. However, this sort of transformation of a Venn diagram with shading into an Euler diagram without shading is not always possible. There are examples of Euler diagrams with 9 sets that are not drawable using simple closed curves without the creation of unwanted zones, since they would have to have non-planar dual graphs.
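The three possible relationships between any two sets (inclusion, overlap, exclusion) can be classified mechanically, as a sketch of what an Euler diagram depicts:

```python
def relationship(a: set, b: set) -> str:
    """Classify the relationship between two sets, as an Euler
    diagram would draw them."""
    if a <= b or b <= a:
        return "inclusion"   # one curve lies inside the other (subset)
    if a & b:
        return "overlap"     # curves intersect (common elements)
    return "exclusion"       # disjoint curves

assert relationship({1, 2}, {1, 2, 3}) == "inclusion"
assert relationship({1, 2}, {2, 3}) == "overlap"
assert relationship({1, 2}, {3, 4}) == "exclusion"
```

Note that an Euler diagram draws only whichever of these relationships actually holds, whereas a two-set Venn diagram always draws all 2² = 4 zones regardless.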
23.
John Venn
–
John Venn, FRS, FSA, was an English logician and philosopher noted for introducing the Venn diagram, used in the fields of set theory, probability, logic, statistics, and computer science. John Venn was born on 4 August 1834 in Kingston upon Hull, Yorkshire, to Martha Sykes and Rev. Henry Venn; his mother died when he was three years old. Venn was descended from a long line of church evangelicals, including his grandfather John Venn. Venn was brought up in a very strict atmosphere at home. His father Henry had played a significant part in the Evangelical movement, and he was also the secretary of the Society for Missions to Africa and the East. His grandfather was pastor to William Wilberforce of the abolitionist movement, in Clapham. He began his education in London, joining Sir Roger Cholmeley's School, now known as Highgate School. He moved on to Islington proprietary school, and in October 1853 he went to Gonville and Caius College, Cambridge. In 1857, he obtained his degree in mathematics and became a fellow. In 1903 he was elected President of the College, a post he held until his death. He would follow his family vocation and become an Anglican priest, ordained in 1859, serving first at the church in Cheshunt, Hertfordshire. In 1862, he returned to Cambridge as a lecturer in moral science, studying and teaching logic and probability theory. These duties led to his developing the diagram which would bear his name. Venn wrote: "I began at once somewhat more steady work on the subjects. I now first hit upon the device of representing propositions by inclusive and exclusive circles." In 1868, he married Susanna Carnegie Edmonstone, with whom he had one son. In 1883, he resigned from the clergy, having concluded that Anglicanism was incompatible with his philosophical beliefs. In that same year, Venn was elected a Fellow of the Royal Society and was awarded a Sc.D. by Cambridge.
Venn is commemorated at the University of Hull by the Venn Building, built in 1928, and by a stained glass window in the hall of Gonville and Caius College, Cambridge. Venn further developed George Boole's theories in the 1881 work Symbolic Logic. Venn also compiled Alumni Cantabrigienses, a biographical register of former members of the University of Cambridge; this work is still being updated online.
24.
Set theory
–
Set theory is a branch of mathematical logic that studies sets, which informally are collections of objects. Although any type of object can be collected into a set, set theory is applied most often to objects that are relevant to mathematics; the language of set theory can be used in the definitions of nearly all mathematical objects. The modern study of set theory was initiated by Georg Cantor and Richard Dedekind in the 1870s. Set theory is commonly employed as a foundational system for mathematics, particularly in the form of Zermelo–Fraenkel set theory with the axiom of choice. Beyond its foundational role, set theory is a branch of mathematics in its own right. Contemporary research into set theory includes a diverse collection of topics, ranging from the structure of the real number line to the study of the consistency of large cardinals. Mathematical topics typically emerge and evolve through interactions among many researchers. Set theory, however, was founded by a single paper in 1874 by Georg Cantor, "On a Property of the Collection of All Real Algebraic Numbers". Since the 5th century BC, beginning with the Greek mathematician Zeno of Elea in the West and early Indian mathematicians in the East, mathematicians had struggled with the concept of infinity. Especially notable is the work of Bernard Bolzano in the first half of the 19th century. Modern understanding of infinity began in 1867–71, with Cantor's work on number theory. An 1872 meeting between Cantor and Richard Dedekind influenced Cantor's thinking and culminated in Cantor's 1874 paper. Cantor's work initially polarized the mathematicians of his day. While Karl Weierstrass and Dedekind supported Cantor, Leopold Kronecker, now seen as a founder of mathematical constructivism, did not. This utility of set theory led to the article "Mengenlehre", contributed in 1898 by Arthur Schoenflies to Klein's encyclopedia. In 1899 Cantor had himself posed the question "What is the cardinal number of the set of all sets?"
Russell used his paradox as a theme in his 1903 review of continental mathematics in his The Principles of Mathematics. In 1906 English readers gained the book Theory of Sets of Points by William Henry Young and his wife Grace Chisholm Young, published by Cambridge University Press. The momentum of set theory was such that debate on the paradoxes did not lead to its abandonment. The work of Zermelo in 1908 and of Abraham Fraenkel in 1922 resulted in the set of axioms ZFC, which became the most commonly used set of axioms for set theory. The work of analysts such as Henri Lebesgue demonstrated the great mathematical utility of set theory. Set theory is commonly used as a foundational system, although in some areas category theory is thought to be a preferred foundation. Set theory begins with a fundamental binary relation between an object o and a set A. If o is a member of A, the notation o ∈ A is used. Since sets are objects, the membership relation can relate sets as well. A derived binary relation between two sets is the subset relation, also called set inclusion. If all the members of set A are also members of set B, then A is a subset of B. For example, {1, 2} is a subset of {1, 2, 3}, and so is {2}, but {1, 4} is not. As implied by this definition, a set is a subset of itself. For cases where this possibility is unsuitable or would make sense to be rejected, the term proper subset is defined.
25.
Probability
–
Probability is the measure of the likelihood that an event will occur. Probability is quantified as a number between 0 and 1; the higher the probability of an event, the more certain it is that the event will occur. A simple example is the tossing of a fair coin. Since the coin is unbiased, the two outcomes, heads and tails, are both equally probable: the probability of heads equals the probability of tails. Since no other outcomes are possible, the probability of each is 1/2. This type of probability is also called a priori probability. Probability theory is used to describe the underlying mechanics and regularities of complex systems. For example, tossing a coin twice will yield head-head, head-tail, tail-head, or tail-tail. The probability of getting an outcome of head-head is 1 out of 4 outcomes, or 1/4 or 0.25. This interpretation considers probability to be the relative frequency in the long run of outcomes. A modification of this is propensity probability, which interprets probability as the tendency of some experiment to yield a certain outcome. Subjectivists assign numbers per subjective probability, i.e. as a degree of belief. The degree of belief has been interpreted as "the price at which you would buy or sell a bet that pays 1 unit of utility if E, 0 if not E". The most popular version of subjective probability is Bayesian probability, which includes expert knowledge as well as experimental data to produce probabilities. The expert knowledge is represented by some prior probability distribution, and these data are incorporated in a likelihood function. The product of the prior and the likelihood, normalized, results in a posterior probability distribution that incorporates all the information known to date. The scientific study of probability is a modern development of mathematics. Gambling shows that there has been an interest in quantifying the ideas of probability for millennia. There are reasons, of course, for the slow development of the mathematics of probability.
Whereas games of chance provided the impetus for the mathematical study of probability, fundamental questions were long obscured. According to Richard Jeffrey, "Before the middle of the seventeenth century, the term 'probable' meant approvable. A probable action or opinion was one such as sensible people would undertake or hold." However, in legal contexts especially, 'probable' could also apply to propositions for which there was good evidence. The sixteenth-century Italian polymath Gerolamo Cardano demonstrated the efficacy of defining odds as the ratio of favourable to unfavourable outcomes.
26.
Linguistics
–
Linguistics is the scientific study of language, and involves an analysis of language form, language meaning, and language in context. Linguists traditionally analyse human language by observing an interplay between sound and meaning. Phonetics is the study of speech and non-speech sounds, and delves into their acoustic and articulatory properties. While the study of semantics typically concerns itself with truth conditions, pragmatics deals with how situational context influences the production of meaning. Grammar is a system of rules which governs the production and use of utterances in a given language. These rules apply to sound as well as meaning, and include componential sub-sets of rules, such as those pertaining to phonology and morphology. Modern theories that deal with the principles of grammar are largely based within Noam Chomsky's ideological school of generative grammar. In the early 20th century, Ferdinand de Saussure distinguished between the notions of langue and parole in his formulation of structural linguistics. According to him, parole is the specific utterance of speech, whereas langue refers to an abstract phenomenon that theoretically defines the principles of language. This distinction resembles the one made by Noam Chomsky between competence and performance in his theory of transformative or generative grammar. According to Chomsky, competence is an innate capacity and potential for language, while performance is the specific way in which it is used by individuals, groups, and communities. The study of parole is the domain of sociolinguistics, the sub-discipline that comprises the study of a system of linguistic facets within a certain speech community. Discourse analysis further examines the structure of texts and conversations emerging out of a speech community's usage of language. Stylistics also involves the study of written, signed, or spoken discourse through varying speech communities, genres, and editorial or narrative formats in the mass media.
In the 1960s, Jacques Derrida, for instance, further distinguished between speech and writing, by proposing that written language be studied as a linguistic medium of communication in itself. Palaeography is accordingly the discipline that studies the evolution of written scripts in language. Linguistics also deals with the social, cultural, historical and political factors that influence language, through which linguistic and language-based context is often determined. Research on language through the sub-branches of historical and evolutionary linguistics also focuses on how languages change and grow, particularly over an extended period of time. Language documentation combines anthropological inquiry with linguistic inquiry, in order to describe languages. Lexicography involves the documentation of words that form a vocabulary. Such a documentation of a vocabulary from a particular language is usually compiled in a dictionary. Computational linguistics is concerned with the statistical or rule-based modeling of natural language from a computational perspective. Specific knowledge of language is applied by speakers during the act of translation and interpretation, as well as in language education, the teaching of a second or foreign language. Policy makers work with governments to implement new plans in education based on linguistic research. Related areas of study include the disciplines of semiotics, literary criticism, translation, and speech-language pathology. Before the 20th century, the term philology, first attested in 1716, was commonly used to refer to the science of language.
27.
Computer science
–
Computer science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems, and its fields can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory, are highly abstract; other fields still focus on challenges in implementing computation. Human–computer interaction considers the challenges in making computers and computations useful, usable, and universally accessible to humans. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity; further, algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner. He may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. Charles Babbage started developing his programmable Analytical Engine in 1834, and in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched card system derived from the Jacquard loom, making it infinitely programmable. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information. When a machine realizing Babbage's designs was finally finished decades later, some hailed it as "Babbage's dream come true".
During the 1940s, as new and more powerful computing machines were developed and it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as an academic discipline in the 1950s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge in 1953. The first computer science program in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own right, and it is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and later the IBM 709 computers. Still, working with an IBM computer was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again. During the late 1950s, the computer science discipline was very much in its developmental stages. Time has seen significant improvements in the usability and effectiveness of computing technology, and modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base
28.
Union (set theory)
–
In set theory, the union of a collection of sets is the set of all elements in the collection. It is one of the fundamental operations through which sets can be combined and related to each other. For explanation of the symbols used in this article, refer to the table of mathematical symbols. The union of two sets A and B is the set of elements which are in A, in B, or in both. For example, if A = {1, 3, 5, 7} and B = {1, 2, 4, 6}, then A ∪ B = {1, 2, 3, 4, 5, 6, 7}. Sets cannot have duplicate elements, so the union of the sets {1, 2, 3} and {2, 3, 4} is {1, 2, 3, 4}; multiple occurrences of identical elements have no effect on the cardinality of a set or its contents. Binary union is an associative operation, that is, A ∪ (B ∪ C) = (A ∪ B) ∪ C. The operations can be performed in any order, and the parentheses may be omitted without ambiguity. Similarly, union is commutative, so the sets can be written in any order. The empty set is an identity element for the operation of union, that is, A ∪ ∅ = A, for any set A. This follows from analogous facts about logical disjunction. Since sets with unions and intersections form a Boolean algebra, intersection distributes over union, A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C), and union distributes over intersection, A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C). One can take the union of several sets simultaneously; for example, the union of three sets A, B, and C contains all elements of A, all elements of B, and all elements of C, and nothing else. Thus, x is an element of A ∪ B ∪ C if and only if x is in at least one of A, B, and C. In mathematics a finite union means any union carried out on a finite number of sets. The most general notion is the union of an arbitrary collection of sets. If M is a set whose elements are themselves sets, then x is an element of the union of M if and only if x belongs to some element of M; in symbols, x ∈ ⋃M ⟺ ∃ A ∈ M, x ∈ A. This idea subsumes the preceding sections, in that A ∪ B ∪ C is the union of the collection {A, B, C}. Also, if M is the empty collection, then the union of M is the empty set. The notation for the general concept can vary considerably. For a finite union of sets S1, S2, S3, …, Sn one often writes S1 ∪ S2 ∪ S3 ∪ ⋯ ∪ Sn or ⋃_{i=1}^n Si. 
In the case that the index set I is the set of natural numbers, notation analogous to that of an infinite series may be used. Whenever the symbol ∪ is placed before other symbols instead of between them, it is written in a larger size
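The union identities above are easy to check concretely. The following sketch uses Python's built-in set type with the example sets from the text; the three-set collection is an arbitrary illustration.

```python
# Union of sets using Python's built-in set type.
A = {1, 3, 5, 7}
B = {1, 2, 4, 6}

# Binary union: duplicate elements collapse, each appears once.
assert A | B == {1, 2, 3, 4, 5, 6, 7}

# Union is commutative and associative.
C = {2, 3}
assert A | B == B | A
assert (A | B) | C == A | (B | C)

# The empty set is the identity element: A ∪ ∅ = A.
assert A | set() == A

# Union of a collection M: x ∈ ⋃M iff x lies in some member of M.
M = [A, B, C]
union_M = set().union(*M)
assert union_M == {x for S in M for x in S}
```

`set().union(*M)` starts from the empty set, so it also returns ∅ when M is the empty collection, matching the convention above.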
29.
Intersection (set theory)
–
In mathematics, the intersection A ∩ B of two sets A and B is the set that contains all elements of A that also belong to B, but no other elements. For explanation of the symbols used in this article, refer to the table of mathematical symbols. The intersection of A and B is written A ∩ B; formally, A ∩ B = {x : x ∈ A and x ∈ B}, that is, x ∈ A ∩ B if and only if x ∈ A and x ∈ B. For example, the intersection of the sets {1, 2, 3} and {2, 3, 4} is {2, 3}; the number 9 is not in the intersection of the set of prime numbers and the set of odd numbers, because 9 is not prime. More generally, one can take the intersection of several sets at once. The intersection of A, B, C, and D, for example, is A ∩ B ∩ C ∩ D. Intersection is an associative operation, thus A ∩ (B ∩ C) = (A ∩ B) ∩ C. Additionally, intersection is commutative, thus A ∩ B = B ∩ A. Inside a universe U one may define the complement Ac of A to be the set of all elements of U not in A. We say that A intersects B at an element x if x belongs to both A and B; A intersects B if their intersection is inhabited. We say that A and B are disjoint if A does not intersect B; in plain language, they have no elements in common. A and B are disjoint if their intersection is empty, denoted A ∩ B = ∅. For example, the sets {1, 2} and {3, 4} are disjoint, while the set of even numbers intersects the set of multiples of 3 at 0, 6, 12, 18 and other numbers. The most general notion is the intersection of an arbitrary nonempty collection of sets. If M is a nonempty set whose elements are themselves sets, then x is an element of the intersection of M if and only if x is an element of every element of M. The notation for this last concept can vary considerably. Set theorists will sometimes write ⋂M, while others will instead write ⋂A∈M A; the latter notation can be generalized to ⋂i∈I Ai, which refers to the intersection of the collection {Ai : i ∈ I}. Here I is a nonempty set, and Ai is a set for every i in I. In the case that the index set I is the set of natural numbers, notation analogous to that of an infinite series may be seen. When formatting is difficult, this can also be written A1 ∩ A2 ∩ A3 ∩ ⋯, even though, strictly speaking, this denotes A1 ∩ (A2 ∩ (A3 ∩ ⋯)). 
Finally, let us note that whenever the symbol ∩ is placed before other symbols instead of between them, it should be of a larger size. Note that in the previous section we excluded the case where M was the empty set
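The basic properties above can be verified on small examples with Python's built-in set type; the sets used here are illustrative.

```python
# Intersection of sets using Python's built-in set type.
A = {1, 2, 3}
B = {2, 3, 4}
C = {3, 4, 5}

assert A & B == {2, 3}             # x ∈ A ∩ B iff x ∈ A and x ∈ B
assert A & B == B & A              # commutative
assert (A & B) & C == A & (B & C)  # associative

# Disjoint sets have an empty intersection.
assert {1, 2}.isdisjoint({3, 4})

# Intersection of a nonempty collection M:
# x ∈ ⋂M iff x belongs to every member of M.
M = [A, B, C]
inter_M = set.intersection(*M)
assert inter_M == {3}
```

Note that `set.intersection` requires at least one set, mirroring the restriction to nonempty collections in the text.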
30.
Stained-glass
–
The term stained glass can refer to coloured glass as a material or to works created from it. Throughout its thousand-year history, the term has been applied almost exclusively to the windows of churches, mosques and other significant religious buildings. Although traditionally made in flat panels and used as windows, the creations of modern stained glass artists also include three-dimensional structures and sculpture. Modern vernacular usage has extended the term stained glass to include domestic leadlight. As a material, stained glass is glass that has been coloured by adding metallic salts during its manufacture. The coloured glass is crafted into stained glass windows in which pieces of glass are arranged to form patterns or pictures, held together by strips of lead. Painted details and yellow stain are often used to enhance the design. The term stained glass is also applied to windows in which the colours have been painted onto the glass and then fused to it in a kiln. Stained glass, as an art and a craft, requires the artistic skill to conceive an appropriate and workable design, and the engineering skills to assemble the piece. A window must fit snugly into the space for which it is made, and must resist wind and rain. Many large windows have withstood the test of time and remained substantially intact since the late Middle Ages. In Western Europe they constitute the major form of pictorial art to have survived. In this context, the purpose of a stained glass window is not to allow those within a building to see the world outside or even primarily to admit light; for this reason stained glass windows have been described as illuminated wall decorations. Stained glass is still popular today, but often referred to as art glass. It is prevalent in luxury homes, commercial buildings, and places of worship; artists and companies are contracted to create art glass ranging from domes and windows to backsplashes. During the late Medieval period, glass factories were set up where there was a ready supply of silica, the essential material for glass manufacture. 
Silica requires very high heat to become molten, something not all glass factories were able to achieve; such materials as potash, soda, and lead can be added to lower the melting temperature. Other substances, such as lime, are added to rebuild the weakened network. Glass is coloured by adding metallic oxide powders or finely divided metals while it is in a molten state. Copper oxides produce green or bluish green, and cobalt makes deep blue. Much modern red glass is produced using copper, which is less expensive than gold and gives a brighter, more vermilion shade of red. Glass coloured while in the pot in the furnace is known as pot metal glass
31.
Formal logic
–
Logic, originally meaning the word or what is spoken, is generally held to consist of the systematic study of the form of arguments. A valid argument is one where there is a specific relation of logical support between the assumptions of the argument and its conclusion. Historically, logic has been studied in philosophy and mathematics, and more recently logic has been studied in computer science, linguistics, and psychology. The concept of logical form is central to logic. The validity of an argument is determined by its logical form; traditional Aristotelian syllogistic logic and modern symbolic logic are examples of formal logic. Informal logic is the study of natural language arguments, and the study of fallacies is an important branch of informal logic. Since much informal argument is not strictly speaking deductive, on some conceptions of logic, informal logic is not logic at all. Formal logic is the study of inference with purely formal content. An inference possesses a purely formal content if it can be expressed as an application of a wholly abstract rule, that is, a rule that is not about any particular thing or property. The works of Aristotle contain the earliest known study of logic, and modern formal logic follows and expands on Aristotle. In many definitions of logic, logical inference and inference with purely formal content are the same. This does not render the notion of informal logic vacuous, because no formal logic captures all of the nuances of natural language. Symbolic logic is the study of symbolic abstractions that capture the formal features of logical inference. Symbolic logic is divided into two main branches, propositional logic and predicate logic. Mathematical logic is an extension of symbolic logic into other areas, in particular to the study of model theory, proof theory, and set theory. Logic is generally considered formal when it analyzes and represents the form of any valid argument type; the form of an argument is displayed by representing its sentences in the formal grammar and symbolism of a logical language to make its content usable in formal inference. 
Simply put, formalising means translating English sentences into the language of logic, and this is called showing the logical form of the argument. It is necessary because indicative sentences of ordinary language show a considerable variety of form. First, grammatical features irrelevant to logic are ignored; second, certain parts of the sentence must be replaced with schematic letters. Thus, for example, the expression all Ps are Qs shows the logical form common to the sentences all men are mortals, all cats are carnivores, all Greeks are philosophers, and so on. The schema can further be condensed into the formula A(P,Q), where the letter A indicates the judgement 'all - are -'. The importance of form was recognised from ancient times
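The claim that validity depends only on logical form can be illustrated by brute force: the schema "all Ps are Qs; a is a P; therefore a is a Q" holds under every interpretation of the predicates. The enumeration below is a toy sketch over a small hypothetical domain, not part of the original text.

```python
from itertools import product

# An argument form is valid when no interpretation makes the premises true
# and the conclusion false. We enumerate every interpretation of the unary
# predicates P and Q over a three-element domain.
domain = [0, 1, 2]
a = 0

valid = True
for P_bits in product([False, True], repeat=len(domain)):
    for Q_bits in product([False, True], repeat=len(domain)):
        P = dict(zip(domain, P_bits))
        Q = dict(zip(domain, Q_bits))
        # Premises: "all Ps are Qs" and "a is a P".
        premises = all(Q[x] for x in domain if P[x]) and P[a]
        if premises and not Q[a]:
            valid = False  # a counterexample would refute validity

assert valid  # no interpretation refutes the syllogistic form
```

The same loop with an invalid form (say, concluding P[a] from Q[a]) would find a counterexample, which is exactly how informal fallacies of form are exposed.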
32.
Frank Ruskey
–
Frank Ruskey is a combinatorialist and computer scientist, and a professor at the University of Victoria. His research involves algorithms for exhaustively listing discrete structures, combinatorial Gray codes, Venn and Euler diagrams, and combinatorics on words. Frank Ruskey is the author of the Combinatorial Object Server, a website for information on and generation of combinatorial objects.
33.
Leonhard Euler
–
Leonhard Euler was a Swiss mathematician, physicist, astronomer, logician and engineer. He introduced much of the modern mathematical terminology and notation, particularly for mathematical analysis, such as the notion of a mathematical function. He is also known for his work in mechanics, fluid dynamics, optics, and astronomy. Euler was one of the most eminent mathematicians of the 18th century, and is held to be one of the greatest in history. He is also considered to be the most prolific mathematician of all time; his collected works fill 60 to 80 quarto volumes, more than anybody in the field. He spent most of his adult life in Saint Petersburg, Russia, and in Berlin, then the capital of Prussia. A statement attributed to Pierre-Simon Laplace expresses Euler's influence on mathematics: Read Euler, read Euler, he is the master of us all. Leonhard Euler was born on 15 April 1707, in Basel, Switzerland, to Paul III Euler, a pastor of the Reformed Church, and Marguerite née Brucker, a pastor's daughter. He had two sisters, Anna Maria and Maria Magdalena, and a younger brother Johann Heinrich. Soon after the birth of Leonhard, the Eulers moved from Basel to the town of Riehen. Paul Euler was a friend of the Bernoulli family; Johann Bernoulli was then regarded as Europe's foremost mathematician, and would eventually be the most important influence on young Leonhard. Euler's formal education started in Basel, where he was sent to live with his maternal grandmother. In 1720, aged thirteen, he enrolled at the University of Basel; during that time, he was receiving Saturday afternoon lessons from Johann Bernoulli, who quickly discovered his new pupil's incredible talent for mathematics. In 1726, Euler completed a dissertation on the propagation of sound with the title De Sono; at that time, he was unsuccessfully attempting to obtain a position at the University of Basel. In 1727, he first entered the Paris Academy Prize Problem competition; Pierre Bouguer, who became known as the father of naval architecture, won, and Euler took second place. 
Euler later won this annual prize twelve times. Around this time Johann Bernoulli's two sons, Daniel and Nicolaus, were working at the Imperial Russian Academy of Sciences in Saint Petersburg. In November 1726 Euler eagerly accepted an offer of a post at the academy, but delayed making the trip to Saint Petersburg while he applied for a physics professorship at the University of Basel. Euler arrived in Saint Petersburg on 17 May 1727, and he was promoted from his junior post in the medical department of the academy to a position in the mathematics department. He lodged with Daniel Bernoulli, with whom he worked in close collaboration. Euler mastered Russian and settled into life in Saint Petersburg. He also took on an additional job as a medic in the Russian Navy. The Academy at Saint Petersburg, established by Peter the Great, was intended to improve education in Russia; as a result, it was made especially attractive to foreign scholars like Euler
34.
Gottfried Wilhelm Leibniz
–
Gottfried Wilhelm Leibniz was a German polymath and philosopher who, among much else, developed the infinitesimal calculus independently of Isaac Newton; Leibniz's notation has been widely used ever since it was published. It was only in the 20th century that his Law of Continuity and Transcendental Law of Homogeneity found mathematical implementation. He became one of the most prolific inventors in the field of mechanical calculators. He also refined the binary number system, which is the foundation of virtually all digital computers. Leibniz, along with René Descartes and Baruch Spinoza, was one of the three great 17th-century advocates of rationalism, and he wrote works on philosophy, politics, law, ethics, theology, history, and philology. Leibniz's contributions to this vast array of subjects were scattered in various learned journals and in tens of thousands of letters. He wrote in several languages, but primarily in Latin, French, and German. There is no complete gathering of the writings of Leibniz in English. Gottfried Leibniz was born on July 1, 1646, toward the end of the Thirty Years' War, in Leipzig, Saxony, to Friedrich Leibniz and Catharina Schmuck. Friedrich noted in his journal: 21. Juny am Sontag 1646 Ist mein Sohn Gottfried Wilhelm, post sextam vespertinam 1/4 uff 7 uhr abents zur welt gebohren; in English: On Sunday 21 June 1646, my son Gottfried Wilhelm is born into the world a quarter after six in the evening, in Aquarius. Leibniz was baptized on July 3 of that year at St. Nicholas Church, Leipzig. His father died when he was six and a half years old, and from that point on he was raised by his mother. Her teachings influenced Leibniz's philosophical thoughts in his later life. Leibniz's father had been a Professor of Moral Philosophy at the University of Leipzig, and the boy later inherited his father's personal library. He was given access to it from the age of seven. Access to his father's library, largely written in Latin, also led to his proficiency in the Latin language, and he composed 300 hexameters of Latin verse, in a single morning, for a special event at school at the age of 13. 
In April 1661 he enrolled in his father's former university at age 15, and he defended his Disputatio Metaphysica de Principio Individui, which addressed the principle of individuation, on June 9, 1663. Leibniz earned his master's degree in Philosophy on February 7, 1664. After one year of legal studies, he was awarded his bachelor's degree in Law on September 28, 1665. His dissertation was titled De conditionibus. In early 1666, at age 19, Leibniz wrote his first book, De Arte Combinatoria, the first part of which was also his habilitation thesis in Philosophy, which he defended in March 1666. His next goal was to earn his license and Doctorate in Law. In 1666, the University of Leipzig turned down Leibniz's doctoral application and refused to grant him a Doctorate in Law, most likely due to his relative youth. Leibniz then enrolled in the University of Altdorf and quickly submitted a thesis; the title of his thesis was Disputatio Inauguralis de Casibus Perplexis in Jure. Leibniz earned his license to practice law and his Doctorate in Law in November 1666. He next declined the offer of an academic appointment at Altdorf, saying that 'my thoughts were turned in an entirely different direction'
35.
Ramon Llull
–
Ramon Llull, T. O. S. F., was a philosopher, logician, Franciscan tertiary and Majorcan writer. He is credited with writing the first major work of Catalan literature, and recently surfaced manuscripts show his work to have predated by several centuries prominent work on elections theory. He is also considered a pioneer of computation theory, especially given his influence on Leibniz. Within the Franciscan Order he is honored as a martyr. He was beatified in 1847 by Pope Pius IX, and his feast day was assigned to 30 June and is celebrated by the Third Order of St. Francis. Llull was born into a wealthy family in Palma, the capital of the newly formed Kingdom of Majorca. James I of Aragon founded the Kingdom of Majorca to integrate the conquered territories of the Balearic Islands into the Crown of Aragon. Llull's parents had come from Catalonia as part of the effort to colonize the formerly Almohad-ruled island. In 1257 he married Blanca Picany, with whom he had two children, Domènec and Magdalena. Although he formed a family, he lived what he would later call the licentious life of a troubadour. Llull served as tutor to James II of Aragon and later became Seneschal to the future King James II of Majorca. In 1263 Llull experienced a religious epiphany in the form of a series of visions. The vision came to him six times in all, leading him to leave his family and position. Following his epiphany Llull became a Franciscan tertiary, taking inspiration from Saint Francis of Assisi. After a short pilgrimage he returned to Majorca, where he purchased a Muslim slave from whom he wanted to learn Arabic. For the next nine years, until 1274, he engaged in study and contemplation in relative solitude. He read extensively in both Latin and Arabic, learning both Christian and Muslim theological and philosophical thought. 
Between 1271 and 1274 he wrote his first works, a compendium of the Muslim thinker Al-Ghazali's logic and the Llibre de contemplació en Déu; his first elucidation of the Art was in the Art Abreujada d'Atrobar Veritat, in 1290. After spending some time teaching in France and being disappointed by the reception of his Art among students, he revised the system, and it is this version that he became known for. It is most clearly presented in his Ars generalis ultima or Ars magna. The Art operated by combining religious and philosophical attributes selected from a number of lists. It is believed that Llull's inspiration for the Ars magna came from observing Arab astrologers use a device called a zairja. The Art was intended as a debating tool for winning Muslims to the Christian faith through logic and reason. Through his detailed analytical efforts, Llull built an in-depth theosophic reference by which a reader could enter any argument or question; the reader then used visual aids and a book of charts to combine various ideas, generating statements which came together to form an answer
36.
Rotational symmetry
–
Rotational symmetry, also known as radial symmetry in biology, is the property a shape has when it looks the same after some rotation by a partial turn. An object's degree of rotational symmetry is the number of distinct orientations in which it looks the same. Formally, rotational symmetry is symmetry with respect to some or all rotations in m-dimensional Euclidean space. Rotations are direct isometries, i.e. isometries preserving orientation. With the modified notion of symmetry for vector fields the symmetry group can also be E+(m). For symmetry with respect to rotations about a point we can take that point as origin. These rotations form the special orthogonal group SO(m), the group of m×m orthogonal matrices with determinant 1. For m = 3 this is the rotation group SO(3). For chiral objects the rotation group is the same as the full symmetry group. Laws of physics are SO(3)-invariant if they do not distinguish different directions in space; because of Noether's theorem, rotational symmetry of a physical system is equivalent to the angular momentum conservation law. Note that 1-fold symmetry is no symmetry. The notation for n-fold symmetry is Cn or simply n. The actual symmetry group is specified by the point or axis of symmetry. For each point or axis of symmetry, the abstract group type is the cyclic group of order n, Zn. The fundamental domain is a sector of 360°/n. If there is e.g. rotational symmetry with respect to an angle of 100°, then also with respect to one of 20°, the greatest common divisor of 100° and 360°. A typical 3D object with rotational symmetry but no mirror symmetry is a propeller; the dihedral rotation group Dn is the rotation group of a regular prism, or regular bipyramid. With 4×3-fold and 3×2-fold axes one obtains the rotation group T of order 12 of a regular tetrahedron; the group is isomorphic to the alternating group A4. With 3×4-fold, 4×3-fold, and 6×2-fold axes one obtains the rotation group O of order 24 of a cube; the group is isomorphic to the symmetric group S4. 
With 6×5-fold, 10×3-fold, and 15×2-fold axes one obtains the rotation group I of order 60 of a dodecahedron; the group is isomorphic to the alternating group A5. The group contains 10 versions of D3 and 6 versions of D5. In the case of the Platonic solids, the 2-fold axes are through the midpoints of opposite edges, and the number of them is half the number of edges. Rotational symmetry with respect to any angle is, in two dimensions, circular symmetry; the fundamental domain is a half-line. In three dimensions we can distinguish cylindrical symmetry and spherical symmetry, that is, no dependence on the angle using cylindrical coordinates and no dependence on either angle using spherical coordinates, respectively. The fundamental domain is a half-plane through the axis, and a radial half-line, respectively. Axisymmetric or axisymmetrical are adjectives which refer to an object having cylindrical symmetry, or axisymmetry
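A minimal sketch of n-fold symmetry in the plane: a finite point set has Cn symmetry about the origin when rotating it by 360°/n maps the set to itself. The square vertices below are an illustrative example, not drawn from the article.

```python
import math

def has_n_fold_symmetry(points, n):
    """Return True if the planar point set maps to itself under a
    rotation by 360/n degrees about the origin (coordinates are
    rounded to absorb floating-point error)."""
    theta = 2 * math.pi / n
    c, s = math.cos(theta), math.sin(theta)
    rotated = {(round(c * x - s * y, 6), round(s * x + c * y, 6))
               for x, y in points}
    return rotated == {(round(x, 6), round(y, 6)) for x, y in points}

# Vertices of a square: 4-fold (and hence also 2-fold) symmetry, but not 3-fold.
square = [(1, 0), (0, 1), (-1, 0), (0, -1)]
assert has_n_fold_symmetry(square, 4)
assert has_n_fold_symmetry(square, 2)
assert not has_n_fold_symmetry(square, 3)
```

The 4-fold case implying the 2-fold case mirrors the divisor observation above: symmetry under 90° forces symmetry under 180°, just as 100° symmetry forces 20° symmetry via the greatest common divisor with 360°.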
37.
Prime number
–
A prime number is a natural number greater than 1 that has no positive divisors other than 1 and itself. A natural number greater than 1 that is not a prime number is called a composite number. For example, 5 is prime because 1 and 5 are its only positive integer factors. The property of being prime is called primality. A simple but slow method of verifying the primality of a given number n is known as trial division. It consists of testing whether n is a multiple of any integer between 2 and √n. Algorithms much more efficient than trial division have been devised to test the primality of large numbers. Particularly fast methods are available for numbers of special forms, such as Mersenne numbers. As of January 2016, the largest known prime number has 22,338,618 decimal digits. There are infinitely many primes, as demonstrated by Euclid around 300 BC. There is no known simple formula that separates prime numbers from composite numbers. However, the distribution of primes within the natural numbers in the large can be statistically modelled. Many questions regarding prime numbers remain open, such as Goldbach's conjecture and the twin prime conjecture. Such questions spurred the development of various branches of number theory. Prime numbers give rise to various generalizations in other mathematical domains, mainly algebra, such as prime elements. A natural number is called a prime number if it has exactly two positive divisors, 1 and the number itself. Natural numbers greater than 1 that are not prime are called composite. Among the numbers 1 to 6, the numbers 2, 3, and 5 are the prime numbers, while 1, 4, and 6 are not prime. 1 is excluded as a prime number, for reasons explained below. 2 is a prime number, since the only natural numbers dividing it are 1 and 2. Next, 3 is prime, too: only 1 and 3 divide 3 without remainder. However, 4 is composite, since 2 is another number dividing 4 without remainder: 4 = 2 · 2. 5 is again prime: none of the numbers 2, 3, or 4 divides 5 without remainder. Next, 6 is divisible by 2 and 3, since 6 = 2 · 3. 
The image at the right illustrates that 12 is not prime: 12 = 3 · 4. No even number greater than 2 is prime because by definition, any such number n has at least three distinct divisors, namely 1, 2, and n
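Trial division as described above only needs to test divisors up to √n, since any factorization n = a · b has a factor not exceeding √n. A minimal sketch:

```python
import math

def is_prime(n):
    """Trial division: test whether any integer d with 2 <= d <= sqrt(n)
    divides n; if none does, n is prime."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

# The primes up to 20, matching the discussion above.
assert [p for p in range(1, 21) if is_prime(p)] == [2, 3, 5, 7, 11, 13, 17, 19]
assert not is_prime(12)  # 12 = 3 · 4
assert not is_prime(1)   # 1 is excluded by definition
```

This is the slow method the text names; primality tests used for the record-sized numbers mentioned above (such as the Lucas–Lehmer test for Mersenne numbers) are far more efficient.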
38.
New Math
–
New Mathematics or New Math was a brief, dramatic change in the way mathematics was taught in American grade schools, and to a lesser extent in European countries, during the 1960s. The phrase is often used now to describe any short-lived fad which quickly becomes highly discredited. The name is given to a set of teaching practices introduced in the U.S. shortly after the Sputnik crisis. Topics introduced in the New Math include modular arithmetic, algebraic inequalities, bases other than 10, matrices, symbolic logic, Boolean algebra, and abstract algebra. All of these topics have been greatly de-emphasized or eliminated in U.S. elementary school and high school curricula since the 1960s. Quine wrote that the rarefied air of Cantorian set theory was not to be associated with the New Math; according to Quine, the New Math involved merely the Boolean algebra of classes. Though the New Math did not succeed in its time, it did reflect great developments occurring in society. Boolean logic is an ingredient of digital design, and binary data are the machine-level representation of the data managed in digital machines. Boolean logic and the rules of sets would later prove to be very valuable with the onset of databases; in particular, the notion of relation as used in relational databases is a realization of a variant of the idea of n-ary relation in set theory. In this and other ways, the New Math proved to be an important link to the computer revolution, which naturally includes all manner of programming. In this sense, the New Math was ahead of its time, and many programmers of the 1980s and later hearkened back to their experience with the New Math. The material also put new demands on teachers, many of whom were required to teach material they did not fully understand, and parents were concerned that they did not understand what their children were learning and could not help them with their studies. 
In an effort to learn the material, many parents attended their children's classes. New Math found some later success in the form of enrichment programs for gifted students from the 1980s onward in Project MEGSSS. Critics complained that it was not worthwhile teaching such material. In 1973, Morris Kline published his critical book Why Johnny Can't Add: The Failure of the New Math. It explains the desire to be relevant with mathematics representing something more modern than traditional topics. Furthermore, noting the trend to abstraction in New Math, Kline says that abstraction is not the first stage, but the last stage, in a mathematical development. In West Germany the changes were seen as part of a process of Bildungsreform. Again, the changes met with a mixed reception, but for different reasons. For example, the end-users of mathematics studies were at that time mostly in the sciences and engineering. Some compromises have since been required, given that discrete mathematics is the basic language of computing
39.
Symmetric difference
–
In mathematics, the symmetric difference, also known as the disjunctive union, of two sets is the set of elements which are in either of the sets and not in their intersection. The symmetric difference of the sets A and B is commonly denoted by A △ B or A ⊖ B. For example, the symmetric difference of the sets {1, 2, 3} and {3, 4} is {1, 2, 4}. The power set of any set becomes a Boolean ring with symmetric difference as the addition of the ring and intersection as the multiplication of the ring. Furthermore, if we denote D = A △ B and I = A ∩ B, then D and I are always disjoint. The symmetric difference is commutative and associative: A △ B = B △ A, and (A △ B) △ C = A △ (B △ C). The empty set is neutral, and every set is its own inverse: A △ ∅ = A and A △ A = ∅. Taken together, we see that the power set of any set X becomes an abelian group if we use the symmetric difference as operation. A group in which every element is its own inverse is sometimes called a Boolean group, and sometimes the Boolean group is defined as the symmetric difference operation on a set. In the case where X has only two elements, the group thus obtained is the Klein four-group. Equivalently, a Boolean group is an elementary abelian 2-group; consequently, the group induced by the symmetric difference is in fact a vector space over the field with 2 elements, Z2. If X is finite, then the singletons form a basis of this vector space. This construction is used in graph theory, to define the cycle space of a graph. In particular, (A △ B) △ (B △ C) = A △ C. This implies a triangle inequality: the symmetric difference of A and C is contained in the union of the symmetric difference of A and B and that of B and C. Intersection distributes over symmetric difference: A ∩ (B △ C) = (A ∩ B) △ (A ∩ C), and this is the prototypical example of a Boolean ring. Further properties of the symmetric difference: A △ B = A∁ △ B∁, where the complements are taken in any set containing both A and B; (⋃α∈I Aα) △ (⋃α∈I Bα) ⊆ ⋃α∈I (Aα △ Bα), where I is an arbitrary non-empty index set; and if f : S → T is any function and A, B ⊆ T are any sets in f's codomain, then f⁻¹(A △ B) = f⁻¹(A) △ f⁻¹(B). 
The symmetric difference can be defined in any Boolean algebra, by writing x △ y = (x ∨ y) ∧ ¬(x ∧ y) = (x ∧ ¬y) ∨ (y ∧ ¬x) = x ⊕ y. This operation has the same properties as the symmetric difference of sets. The repeated symmetric difference is in a sense equivalent to an operation on a multiset of sets giving the set of elements which are in an odd number of sets. As above, the symmetric difference of a collection of sets contains just the elements which are in an odd number of the sets in the collection
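Python exposes the symmetric difference as the `^` operator on sets, which makes the identities above, and the odd-number-of-sets characterization of the repeated symmetric difference, easy to verify on small illustrative examples:

```python
from functools import reduce

A = {1, 2, 3}
B = {3, 4}
C = {2, 4}

assert A ^ B == {1, 2, 4}            # in either set, but not in both
assert A ^ B == B ^ A                # commutative
assert (A ^ B) ^ C == A ^ (B ^ C)    # associative
assert A ^ set() == A                # the empty set is neutral
assert A ^ A == set()                # every set is its own inverse
assert (A ^ B) ^ (B ^ C) == A ^ C
assert A & (B ^ C) == (A & B) ^ (A & C)  # intersection distributes

# Repeated symmetric difference: the elements in an odd number of the sets.
sets = [A, B, C]
odd = reduce(set.symmetric_difference, sets)
counts = {x: sum(x in S for S in sets) for x in A | B | C}
assert odd == {x for x, k in counts.items() if k % 2 == 1}
```

The self-inverse property is exactly why the power set forms a Boolean group under `^`: XOR-style cancellation makes each element its own inverse.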
40.
Complement (set theory)
–
In set theory, the complement of a set A refers to the elements not in A. The relative complement of A with respect to a set B, also termed the difference of sets B and A, written B ∖ A, is the set of elements in B but not in A. When all sets under consideration are considered to be subsets of a given set U, the absolute complement of A is the set of elements in U but not in A. If A and B are sets, then the relative complement of A in B, also termed the set-theoretic difference of B and A, is the set of elements in B but not in A. The relative complement of A in B is denoted B ∖ A according to the ISO 31-11 standard. If R is the set of real numbers and Q is the set of rational numbers, then R ∖ Q is the set of irrational numbers. Let A, B, and C be three sets. The following identities capture notable properties of relative complements: C ∖ (A ∩ B) = (C ∖ A) ∪ (C ∖ B), and C ∖ (B ∖ A) = (A ∩ C) ∪ (C ∖ B), with the important special case C ∖ (C ∖ A) = A ∩ C demonstrating that intersection can be expressed using only the relative complement operation. If A is a set, then the absolute complement of A is the set of elements not in A. Formally, A∁ = U ∖ A. The absolute complement of A is usually denoted by A∁; other notations include Ac, A̅, A′, ∁U A, and ∁A. Assume that the universe is the set of integers. If A is the set of odd numbers, then the complement of A is the set of even numbers. If B is the set of multiples of 3, then the complement of B is the set of numbers congruent to 1 or 2 modulo 3. Assume that the universe is the standard 52-card deck. If the set A is the suit of spades, then the complement of A is the union of the suits of clubs, diamonds, and hearts. If the set B is the union of the suits of clubs and diamonds, then the complement of B is the union of the suits of hearts and spades. Let A and B be two sets in a universe U. The following identities capture important properties of complements: De Morgan's laws, (A ∪ B)∁ = A∁ ∩ B∁ and (A ∩ B)∁ = A∁ ∪ B∁. Complement laws: A ∪ A∁ = U, A ∩ A∁ = ∅, and if A ⊂ B, then B∁ ⊂ A∁. 
Involution or double complement law: (A∁)∁ = A. Relationships between relative and absolute complements: A ∖ B = A ∩ B∁. Relationship with set difference: A∁ ∖ B∁ = B ∖ A. The first two complement laws above show that if A is a non-empty, proper subset of U, then {A, A∁} is a partition of U. In the LaTeX typesetting language, the command \setminus is usually used for rendering a set difference symbol; when rendered, the \setminus command looks identical to \backslash except that it has a little more space in front of and behind the slash, akin to the LaTeX sequence \mathbin{\backslash}
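With a fixed finite universe, the complement laws above can be checked directly. The universe and sets below are illustrative, and `complement` is a hypothetical helper, since Python's set type has no built-in absolute complement:

```python
U = set(range(1, 11))               # a small illustrative universe
A = {n for n in U if n % 2 == 1}    # odd numbers in U
B = {n for n in U if n % 3 == 0}    # multiples of 3 in U

def complement(S, universe=U):
    """Absolute complement relative to the chosen universe."""
    return universe - S

# The complement of the odds is the evens.
assert complement(A) == {n for n in U if n % 2 == 0}

# De Morgan's laws.
assert complement(A | B) == complement(A) & complement(B)
assert complement(A & B) == complement(A) | complement(B)

# Complement laws and the double complement (involution) law.
assert A | complement(A) == U and A & complement(A) == set()
assert complement(complement(A)) == A

# Relative complement: A \ B = A ∩ B∁, and A∁ \ B∁ = B \ A.
assert A - B == A & complement(B)
assert complement(A) - complement(B) == B - A
```

Python's `-` operator on sets is the relative complement (set difference), so the absolute complement is just the difference taken from the universe.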