1.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to its exact scope. Mathematicians seek out patterns and use them to formulate new conjectures, and resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, and measurement; practical mathematics has been a human activity for as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said, "The universe cannot be read until we have learned the language in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences.
Applied mathematics has led to entirely new mathematical disciplines, such as statistics. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind. There is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns; in Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a study of mathematics in its own right. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today; the overwhelming majority of works in this ocean contain new mathematical theorems and their proofs. The word máthēma is derived from the ancient Greek μανθάνω, while the modern Greek equivalent is μαθαίνω; in Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study" even in Classical times.
2.
Probability theory
–
Probability theory is the branch of mathematics concerned with probability, the analysis of random phenomena. It is not possible to predict precisely the results of individual random events; however, when a random event is repeated many times, the sequence of outcomes exhibits certain patterns. Two representative mathematical results describing such patterns are the law of large numbers and the central limit theorem. As a mathematical foundation for statistics, probability theory is essential to human activities that involve quantitative analysis of large sets of data. Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state; a great discovery of twentieth-century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics. Christiaan Huygens published a book on the subject in 1657, and the subject developed further through the 19th century. Initially, probability theory mainly considered discrete events, and its methods were mainly combinatorial. Eventually, analytical considerations compelled the incorporation of continuous variables into the theory, and this culminated in modern probability theory, on foundations laid by Andrey Nikolaevich Kolmogorov. Kolmogorov combined the notion of sample space, introduced by Richard von Mises, with measure theory, and presented his axiom system for probability theory in 1933. This became the mostly undisputed axiomatic basis for modern probability theory. Most introductions to probability theory treat discrete probability distributions and continuous probability distributions separately; the more mathematically advanced, measure theory-based treatment of probability covers the discrete, the continuous, and any mix of the two. Consider an experiment that can produce a number of outcomes. The set of all outcomes is called the sample space of the experiment. The power set of the sample space is formed by considering all different collections of possible results. For example, rolling an honest die produces one of six possible results. One collection of possible results corresponds to getting an odd number; thus, the subset {1, 3, 5} is an element of the power set of the sample space of die rolls.
In this case, {1, 3, 5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, that event is said to have occurred. A probability is a way of assigning every event a value between zero and one, with the requirement that the event made up of all possible results (in our example, the event {1, 2, 3, 4, 5, 6}) be assigned a value of one. For mutually exclusive events such as {1, 6}, {3}, and {2, 4}, the probability that any one of them will occur is the sum of their individual probabilities, here 5/6. This is the same as saying that the probability of the event {1, 2, 3, 4, 6} is 5/6; this event encompasses the possibility of any number except five being rolled. The mutually exclusive event {5} has a probability of 1/6, and the event {1, 2, 3, 4, 5, 6} has a probability of 1, that is, absolute certainty. Discrete probability theory deals with events that occur in countable sample spaces. Modern definition: the modern definition starts with a finite or countable set called the sample space, which relates to the set of all possible outcomes in the classical sense, denoted by Ω.
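The die example above can be checked mechanically. The following is a minimal sketch in Python; the event sets and the uniform (fair-die) assignment are taken from the text, while the helper name `prob` is ours:

```python
from fractions import Fraction

# Sample space for one roll of a fair die.
omega = {1, 2, 3, 4, 5, 6}

def prob(event):
    """Probability of an event under the uniform (fair-die) assignment."""
    return Fraction(len(event & omega), len(omega))

odd = {1, 3, 5}                  # the event "the die falls on an odd number"
union = {1, 6} | {3} | {2, 4}    # union of three mutually exclusive events

assert prob(odd) == Fraction(1, 2)
assert prob(union) == Fraction(5, 6)   # same as prob({1, 2, 3, 4, 6})
assert prob({5}) == Fraction(1, 6)     # the complementary event
assert prob(omega) == 1                # absolute certainty
```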
3.
Combinatorics
–
Combinatorics is a branch of mathematics concerning the study of finite or countable discrete structures. Many combinatorial questions have historically been considered in isolation, giving an ad hoc solution to a problem arising in some mathematical context. In the later twentieth century, however, powerful and general methods were developed. One of the oldest and most accessible parts of combinatorics is graph theory. Combinatorics is used frequently in computer science to obtain formulas and estimates in the analysis of algorithms. A mathematician who studies combinatorics is called a combinatorialist or a combinatorist. Basic combinatorial concepts and enumerative results appeared throughout the ancient world. The Greek historian Plutarch discusses an argument between Chrysippus and Hipparchus over a rather delicate enumerative problem, which was later shown to be related to Schröder–Hipparchus numbers. In the Ostomachion, Archimedes considers a tiling puzzle. In the Middle Ages, combinatorics continued to be studied, largely outside of the European civilization. The Indian mathematician Mahāvīra provided formulae for the number of permutations and combinations; later, in Medieval England, campanology provided examples of what is now known as Hamiltonian cycles in certain Cayley graphs on permutations. During the Renaissance, combinatorics enjoyed a rebirth together with the rest of mathematics and the sciences, and works of Pascal, Newton, Jacob Bernoulli and Euler became foundational in the emerging field. In modern times, the works of J. J. Sylvester and Percy MacMahon helped lay the foundation for enumerative and algebraic combinatorics. Graph theory also enjoyed an explosion of interest at the same time, especially in connection with the four color problem. In the second half of the 20th century, combinatorics enjoyed a rapid growth. In part, the growth was spurred by new connections and applications to other fields, ranging from algebra to probability, from functional analysis to number theory, and so on.
These connections shed the boundaries between combinatorics and parts of mathematics and theoretical computer science, but at the same time led to a partial fragmentation of the field. Enumerative combinatorics is the most classical area of combinatorics and concentrates on counting the number of combinatorial objects. Although counting the number of elements in a set is a rather broad mathematical problem, many of the problems that arise have a relatively simple combinatorial description; the Fibonacci numbers are the basic example of a problem in enumerative combinatorics. The twelvefold way provides a framework for counting permutations, combinations and partitions. Analytic combinatorics concerns the enumeration of combinatorial structures using tools from complex analysis. In contrast with enumerative combinatorics, which uses explicit combinatorial formulae and generating functions to describe the results, analytic combinatorics aims at obtaining asymptotic formulae. Partition theory studies various enumeration and asymptotic problems related to integer partitions; originally a part of number theory and analysis, it is now considered a part of combinatorics or an independent field. It incorporates the bijective approach and various tools in analysis and analytic number theory. Graphs are basic objects in combinatorics.
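The basic enumerative counts mentioned above (permutations, combinations, Fibonacci numbers) are easy to check with Python's standard library; a short sketch, in which the function name `fib` is ours:

```python
from math import comb, perm, factorial

# Closed-form counts from enumerative combinatorics.
assert factorial(4) == 24   # orderings of 4 distinct objects
assert perm(5, 2) == 20     # ordered pairs drawn from 5 elements
assert comb(5, 2) == 10     # unordered pairs drawn from 5 elements

# Fibonacci numbers, the basic enumerative example, defined by
# F(n) = F(n-1) + F(n-2) with F(0) = 0, F(1) = 1.
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert [fib(n) for n in range(1, 8)] == [1, 1, 2, 3, 5, 8, 13]
```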
4.
Philippe Flajolet
–
Philippe Flajolet was a French computer scientist. A former student of École Polytechnique, Flajolet received his Ph.D. in computer science from University Paris Diderot in 1973. Most of Flajolet's research was dedicated to general methods for analyzing the computational complexity of algorithms, including the theory of average-case complexity. He introduced the theory of analytic combinatorics; with Robert Sedgewick of Princeton University, he wrote the first book-length treatment of the topic, the 2009 book entitled Analytic Combinatorics. A summary of his research up to 1998 can be found in the article "Philippe Flajolet's research in Combinatorics and Analysis of Algorithms" by H. Prodinger and W. Szpankowski, Algorithmica 22, 366–387. At the time of his death from an illness, Flajolet was a research director at INRIA in Rocquencourt. From 1994 to 2003 he was a member of the French Academy of Sciences. He was also a member of the Academia Europaea. The HyperLogLog commands of Redis, released in April 2014, are prefixed with PF in honor of Philippe Flajolet. Selected works: with Robert Sedgewick, An Introduction to the Analysis of Algorithms, 1995, ISBN 0-201-40009-X; with Robert Sedgewick, Analytic Combinatorics, CUP, Cambridge 2009, ISBN 978-0-521-89806-5; Random tree models in the analysis of algorithms, INRIA, Rocquencourt 1987; with Andrew Odlyzko, Singularity analysis of generating functions, 1988. See also: Philippe Flajolet's Home Page; "Philippe Flajolet and Analytic Combinatorics", conference in the memory of Philippe Flajolet; Luc Devroye, "Philippe Flajolet, 1 December 1948 – 22 March 2011".
5.
Randomness
–
Randomness is the lack of pattern or predictability in events. A random sequence of events, symbols or steps has no order and does not follow an intelligible pattern or combination. Individual random events are by definition unpredictable, but in many cases the frequency of different outcomes over a large number of events is predictable. For example, when throwing two dice, the outcome of any particular roll is unpredictable, but a sum of 7 will occur twice as often as 4. In this view, randomness is a measure of uncertainty of an outcome, rather than haphazardness, and applies to concepts of chance, probability, and information entropy. The fields of mathematics, probability, and statistics use formal definitions of randomness. In statistics, a random variable is an assignment of a numerical value to each possible outcome of an event space. This association facilitates the identification and the calculation of probabilities of the events. Random variables can appear in random sequences. A random process is a sequence of random variables whose outcomes do not follow a deterministic pattern. These and other constructs are extremely useful in probability theory and the applications of randomness. Randomness is most often used in statistics to signify well-defined statistical properties. Monte Carlo methods, which rely on random input, are important techniques in science, as, for instance, in computational science. By analogy, quasi-Monte Carlo methods use quasirandom number generators. Random selection is a method of selecting items from a population where the probability of choosing a specific item is the proportion of those items in the population. For example, with a bowl containing just 10 red marbles and 90 blue marbles, a random selection mechanism would choose a red marble with probability 1/10. Note that a random selection mechanism that selected 10 marbles from this bowl would not necessarily result in 1 red and 9 blue.
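The marble example above can be simulated; the following Python sketch (seeded for reproducibility) draws from a 10-red/90-blue bowl and illustrates both claims: a single draw is red with probability close to 1/10, while a sample of 10 marbles need not contain exactly 1 red:

```python
import random
from collections import Counter

random.seed(42)  # reproducible sketch

bowl = ["red"] * 10 + ["blue"] * 90

# The long-run frequency of a red marble matches its proportion in the bowl.
draws = Counter(random.choice(bowl) for _ in range(100_000))
assert abs(draws["red"] / 100_000 - 0.10) < 0.01

# A single sample of 10 marbles does not necessarily contain exactly 1 red.
sample = random.sample(bowl, 10)
print(sample.count("red"))  # often 1, but 0, 2, 3, ... also occur
```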
In situations where a population consists of items that are distinguishable, if the selection process is such that each member of the population, say of research subjects, has the same probability of being chosen, then we can say the selection process is random. In ancient history, the concepts of chance and randomness were intertwined with that of fate. Many ancient peoples threw dice to determine fate, and this later evolved into games of chance. Most ancient cultures used various methods of divination to attempt to circumvent randomness. The Chinese of 3000 years ago were perhaps the earliest people to formalize odds and chance. The Greek philosophers discussed randomness at length, but only in non-quantitative forms, and it was only in the 16th century that Italian mathematicians began to formalize the odds associated with various games of chance. The invention of calculus had an impact on the formal study of randomness. The early part of the 20th century saw a growth in the formal analysis of randomness. In the mid- to late-20th century, ideas of information theory introduced new dimensions to the field via the concept of algorithmic randomness.
6.
Probability
–
Probability is the measure of the likelihood that an event will occur. Probability is quantified as a number between 0 and 1; the higher the probability of an event, the more certain it is that the event will occur. A simple example is the tossing of a fair coin. Since the coin is unbiased, the two outcomes are both equally probable: the probability of "heads" equals the probability of "tails". Since no other outcomes are possible, the probability of either is 1/2; this type of probability is also called a priori probability. Probability theory is used to describe the underlying mechanics and regularities of complex systems. For example, tossing a coin twice will yield head-head, head-tail, tail-head, or tail-tail. The probability of getting an outcome of head-head is 1 out of 4 outcomes, or 1/4, or 0.25. This interpretation considers probability to be the relative frequency in the long run of outcomes. A modification of this is propensity probability, which interprets probability as the tendency of some experiment to yield a certain outcome. Subjectivists assign numbers per subjective probability, i.e., as a degree of belief. The degree of belief has been interpreted as "the price at which you would buy or sell a bet that pays 1 unit of utility if E, 0 if not E". The most popular version of subjective probability is Bayesian probability, which includes expert knowledge as well as data to produce probabilities. The expert knowledge is represented by some prior probability distribution, and the data are incorporated in a likelihood function. The product of the prior and the likelihood, normalized, results in a probability distribution that incorporates all the information known to date. The scientific study of probability is a modern development of mathematics. Gambling shows that there has been an interest in quantifying the ideas of probability for millennia. There are, of course, reasons for the slow development of the mathematics of probability.
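The two-toss example above can be enumerated directly; a minimal sketch, with each of the four equally likely outcomes listed explicitly:

```python
from fractions import Fraction
from itertools import product

# Enumerate the sample space of two tosses of a fair coin.
space = list(product("HT", repeat=2))
assert space == [("H", "H"), ("H", "T"), ("T", "H"), ("T", "T")]

# Each of the four equally likely outcomes has probability 1/4.
p_head_head = Fraction(space.count(("H", "H")), len(space))
assert p_head_head == Fraction(1, 4)

# Relative frequency: at least one head occurs in 3 of the 4 outcomes.
p_at_least_one_head = Fraction(sum("H" in o for o in space), len(space))
assert p_at_least_one_head == Fraction(3, 4)
```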
Whereas games of chance provided the impetus for the mathematical study of probability, fundamental issues are still obscured by the superstitions of gamblers. According to Richard Jeffrey, "Before the middle of the seventeenth century, the term probable meant approvable, and was applied in that sense to opinion and to action. A probable action or opinion was one such as sensible people would undertake or hold." However, in legal contexts especially, probable could also apply to propositions for which there was good evidence. The sixteenth-century Italian polymath Gerolamo Cardano demonstrated the efficacy of defining odds as the ratio of favourable to unfavourable outcomes.
7.
Product (mathematics)
–
In mathematics, a product is the result of multiplying, or an expression that identifies factors to be multiplied. Thus, for instance, 6 is the product of 2 and 3. The order in which real or complex numbers are multiplied has no bearing on the product; this is known as the commutative law of multiplication. When matrices or members of various other associative algebras are multiplied, the product can depend on the order of the factors: matrix multiplication, for example, and multiplication in other algebras is in general non-commutative. There are many different kinds of products in mathematics besides being able to multiply just numbers, polynomials or matrices; an overview of these different kinds of products is given here. Placing several stones into a rectangular pattern with r rows and s columns gives r ⋅ s = ∑_{i=1}^{s} r = ∑_{j=1}^{r} s stones. Integers allow positive and negative numbers. The product of two quaternions can be found in the article on quaternions; it is interesting to note that, in this case, the two orders of multiplication give in general different results. The product operator for the product of a sequence is denoted by the capital Greek letter pi, ∏. The product of a sequence consisting of one number is just that number itself. The product of no factors at all is known as the empty product, and is equal to 1. Commutative rings have a product operation. Under the Fourier transform, convolution becomes point-wise function multiplication. Some products have very different names but convey essentially the same idea; a brief overview of these is given here. By the very definition of a vector space, one can form the product of any scalar with any vector, giving a map R × V → V. A scalar product is a map ⋅ : V × V → R satisfying certain conditions (bilinearity, symmetry, and positive definiteness). From the scalar product, one can define a norm by letting ‖v‖ = √(v ⋅ v). Now we consider the composition of two linear mappings between finite-dimensional vector spaces. Let the linear mapping f map V to W, and let the linear mapping g map W to U. Then one can get (g ∘ f)(v) = g(f(v)) = g_{jk} f_{ij} v_i b_k^U, with summation over repeated indices.
Or in matrix form: (g ∘ f)(v) = G F v, in which the i-row, j-column element of F, denoted by F_{ij}, is f_{ji}. The composition of more than two linear mappings can be similarly represented by a chain of matrix multiplications. To see this, let r = dim(U), s = dim(V) and t = dim(W). Let U = (u_1, …, u_r) be a basis of U, V = (v_1, …, v_s) be a basis of V, and W = (w_1, …, w_t) be a basis of W. Then B ⋅ A = M_U^W ∈ R^{s×t} is the matrix representing g ∘ f : U → W. In other words, the matrix product is the description in coordinates of the composition of linear functions. For infinite-dimensional vector spaces, one also has the tensor product of Hilbert spaces and the topological tensor product. The tensor product, outer product and Kronecker product all convey the same general idea.
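The empty-product convention and the fact that the matrix product describes composition of linear maps can both be checked numerically. A plain-Python sketch, in which the helper names `matmul` and `apply_map` are ours:

```python
from math import prod

# Empty product convention: a product of no factors is 1.
assert prod([]) == 1

def matmul(A, B):
    """(A @ B)[i][j] = sum_k A[i][k] * B[k][j]."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def apply_map(M, v):
    """Apply the linear map with matrix M to the vector v."""
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

F = [[1, 2], [3, 4]]   # matrix of f
G = [[0, 1], [1, 1]]   # matrix of g
v = [5, 6]

# Applying g after f agrees with applying the single matrix G·F.
assert apply_map(G, apply_map(F, v)) == apply_map(matmul(G, F), v)
```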
8.
Information
–
Information is that which informs; in other words, it is the answer to a question of some kind. It is thus related to data and knowledge, as data represents values attributed to parameters. As it regards data, the information's existence is not necessarily coupled to an observer, while in the case of knowledge, the information requires a cognitive observer. At its most fundamental, information is any propagation of cause and effect within a system. Information can be encoded into various forms for transmission and interpretation. It can also be encrypted for safe storage and communication. The uncertainty of an event is measured by its probability of occurrence and is inversely proportional to that. The more uncertain an event, the more information is required to resolve the uncertainty of that event. The bit is a unit of information, but other units such as the nat may be used. Example: the information in one fair coin flip is log₂(2) = 1 bit. The concept that information is the message has different meanings in different contexts. The English word was derived from the Latin stem (information-) of the nominative (informatio). Inform itself comes from the Latin verb informare, which means to give form. Eidos can also be associated with thought, proposition, or even concept. The ancient Greek word for information is πληροφορία, which derives from πλήρης (fully) and φέρω (to bear); it literally means "bears fully" or "conveys fully". In modern Greek the word Πληροφορία is still in use and has the same meaning as the word information in English. In addition to its meaning, the word Πληροφορία as a symbol has deep roots in Aristotle's semiotic triangle. In this regard it can be interpreted to communicate information to the one decoding that specific type of sign. From the stance of information theory, information is taken as an ordered sequence of symbols from an alphabet, say an input alphabet χ and an output alphabet ϒ. Information processing consists of a function that maps any input sequence from χ into an output sequence from ϒ.
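The coin-flip example generalizes: the information conveyed by an event of probability p is −log₂ p bits, so rarer events carry more information. A sketch, with the helper name `self_information` being ours:

```python
from math import log2

def self_information(p):
    """Information (in bits) conveyed by an event of probability p."""
    return -log2(p)

# One fair coin flip: two equally likely outcomes -> log2(2) = 1 bit.
assert self_information(1 / 2) == 1.0

# The more uncertain (less probable) an event, the more information
# is required to resolve it.
assert self_information(1 / 8) == 3.0
assert self_information(1 / 8) > self_information(1 / 2)
```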
The mapping may be probabilistic or deterministic, and it may have memory or be memoryless. Often information can be viewed as a type of input to an organism or system. Inputs are of two kinds: some inputs are important to the function of the organism or system by themselves. In his book Sensory Ecology, Dusenbery called these causal inputs. Other inputs are important only because they are associated with causal inputs and can be used to predict the occurrence of a causal input at a later time. Some information is important because of its association with other information, but eventually there must be a connection to a causal input.
9.
Independence (probability theory)
–
In probability theory, two events are independent, statistically independent, or stochastically independent if the occurrence of one does not affect the probability of occurrence of the other. Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other. Two events A and B are independent if and only if their joint probability equals the product of their probabilities: P(A ∩ B) = P(A)P(B). Although the derived expressions in terms of conditional probability may seem more intuitive, they are not the preferred definition, as the conditional probabilities may be undefined if P(A) or P(B) is 0. Furthermore, the preferred definition makes clear by symmetry that when A is independent of B, B is also independent of A. A finite set of events is pairwise independent if every pair of events is independent. A finite set of events is mutually independent if every event is independent of any intersection of the other events, that is, if and only if for every n-element subset {A_1, …, A_n}, P(A_1 ∩ ⋯ ∩ A_n) = P(A_1) ⋯ P(A_n). This is called the multiplication rule for independent events. Note that it is not a condition involving only the product of all the probabilities of all single events. For more than two events, a mutually independent set of events is pairwise independent, but the converse is not necessarily true. Two random variables X and Y are independent if and only if the elements of the π-system generated by them are independent; that is to say, for every a and b, the events {X ≤ a} and {Y ≤ b} are independent events. A set of random variables is pairwise independent if and only if every pair of random variables is independent. A set of random variables is mutually independent if and only if for any finite subset X_1, …, X_n and any finite sequence of numbers a_1, …, a_n, the events {X_1 ≤ a_1}, …, {X_n ≤ a_n} are mutually independent events. The measure-theoretically inclined may prefer to substitute events {X ∈ A} for events {X ≤ a} in the above definition; that definition is exactly equivalent to the one above when the values of the random variables are real numbers.
It has the advantage of working also for complex-valued random variables or for random variables taking values in any measurable space. Intuitively, two random variables X and Y are conditionally independent given Z if, once Z is known, the value of Y adds no further information about X. For instance, two measurements X and Y of the same underlying quantity Z are not independent, but they are conditionally independent given Z. The formal definition of conditional independence is based on the idea of conditional distributions. If X, Y, and Z are discrete random variables, then X and Y are conditionally independent given Z if P(X ≤ x | Y = y, Z = z) = P(X ≤ x | Z = z) for any x, y and z with P(Z = z) > 0. That is, the conditional distribution for X given Y and Z is the same as that given Z alone.
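The product definition of independence is easy to verify on a fair-die sample space; a sketch, with the events chosen purely for illustration:

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}   # fair die, uniform probability

def P(event):
    return Fraction(len(event & omega), len(omega))

A = {2, 4, 6}   # "the roll is even"
B = {1, 2}      # "the roll is at most 2"

# A and B are independent: P(A ∩ B) = P(A) · P(B).
assert P(A & B) == P(A) * P(B) == Fraction(1, 6)

# A counterexample: A and C = {2, 4} are not independent,
# since P(A ∩ C) = 1/3 while P(A) · P(C) = 1/6.
C = {2, 4}
assert P(A & C) != P(A) * P(C)
```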
10.
Permutation
–
Permutations are arrangements of the members of a set into a sequence or order. These differ from combinations, which are selections of some members of a set where order is disregarded. For example, written as tuples, there are six permutations of the set {1, 2, 3}, namely (1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2) and (3, 2, 1); these are all the possible orderings of this three-element set. As another example, an anagram of a word, all of whose letters are different, is a permutation of its letters; in this example, the letters are already ordered in the original word and the anagram is a reordering of the letters. The study of permutations of finite sets is a topic in the field of combinatorics. Permutations occur, in more or less prominent ways, in almost every area of mathematics. For similar reasons, permutations arise in the study of sorting algorithms in computer science. The number of permutations of n distinct objects is n factorial, usually written as n!, which means the product of all positive integers less than or equal to n. In algebra, and particularly in group theory, a permutation of a set S is defined as a bijection from S to itself; that is, it is a function from S to S for which every element occurs exactly once as an image value. This is related to the rearrangement of the elements of S in which each element s is replaced by the corresponding f(s). The collection of such permutations forms a group called the symmetric group of S. The key to this structure is the fact that the composition of two permutations results in another rearrangement. Permutations may act on structured objects by rearranging their components, or by certain replacements of symbols. In elementary combinatorics, the k-permutations, or partial permutations, are the ordered arrangements of k distinct elements selected from a set. When k is equal to the size of the set, these are the permutations of the set. Fabian Stedman in 1677 described factorials when explaining the number of permutations of bells in change ringing.
Starting from two bells: "first, two must be admitted to be varied in two ways", which he illustrates by showing 12 and 21. He then explains that with three bells there are "three times two figures to be produced out of three", which again is illustrated. His explanation involves: "cast away 3, and 1.2 will remain; cast away 2, and 1.3 will remain; cast away 1, and 2.3 will remain". He then moves on to four bells and repeats the casting away argument, showing that there will be four different sets of three. Effectively, this is a recursive process. He continues with five bells using the casting away method and tabulates the resulting 120 combinations. At this point he gives up and remarks, "Now the nature of these methods is such…". In modern mathematics there are many similar situations in which understanding a problem requires studying certain permutations related to it. There are two equivalent common ways of regarding permutations, sometimes called the active and passive forms, or in older terminology substitutions and permutations; which form is preferable depends on the type of questions being asked in a given discipline. The active way to regard permutations of a set S is to define them as the bijections from S to itself. Thus, the permutations are thought of as functions which can be composed with each other, forming groups of permutations.
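The counts above, including Stedman's bell numbers, can be reproduced with Python's itertools; a minimal sketch:

```python
from itertools import permutations
from math import factorial, perm

# All 3! = 6 orderings of a three-element set.
elems = (1, 2, 3)
assert sorted(permutations(elems)) == [
    (1, 2, 3), (1, 3, 2), (2, 1, 3),
    (2, 3, 1), (3, 1, 2), (3, 2, 1),
]
assert len(list(permutations(elems))) == factorial(3)

# k-permutations: ordered arrangements of k elements from a set of n.
assert len(list(permutations(range(5), 2))) == perm(5, 2) == 20

# Stedman's bell counts: 2 bells -> 2, 3 -> 6, 4 -> 24, 5 -> 120.
assert [factorial(n) for n in (2, 3, 4, 5)] == [2, 6, 24, 120]
```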
11.
Bijection
–
In mathematical terms, a bijective function f : X → Y is a one-to-one and onto mapping of a set X to a set Y. A bijection from the set X to the set Y has an inverse function from Y to X. If X and Y are finite sets, then the existence of a bijection means they have the same number of elements. For infinite sets the picture is more complicated, leading to the concept of cardinal number. A bijective function from a set to itself is called a permutation. Bijective functions are essential to many areas of mathematics, including the definitions of isomorphism, homeomorphism, diffeomorphism, and permutation group. A bijection is a function with domain X in which every element of X is paired with exactly one element of Y, and every element of Y is paired with exactly one element of X. Functions which reach every element of Y are said to be onto Y and are called surjections; functions which pair distinct elements of X with distinct elements of Y are said to be one-to-one functions and are called injections. With this terminology, a bijection is a function which is both a surjection and an injection, or, using other words, a bijection is a function which is both one-to-one and onto. Consider the batting line-up of a baseball or cricket team. The set X will be the players on the team and the set Y will be the positions in the batting order; the pairing is given by which player is in what position in this order. The function property is satisfied since each player is somewhere in the list; injectivity is satisfied since no player bats in two positions in the order; surjectivity says that for each position in the order, there is some player batting in that position. In a classroom there are a certain number of seats. A bunch of students enter the room and the instructor asks them all to be seated. After a quick look around the room, the instructor declares that there is a bijection between the set of students and the set of seats, where each student is paired with the seat they are sitting in.
The instructor was able to conclude that there were just as many seats as there were students, without having to count either set. For any set X, the identity function 1_X : X → X is bijective. The function f : R → R, f(x) = 2x + 1 is bijective, since for each y there is a unique x = (y − 1)/2 such that f(x) = y. More generally, any linear function over the reals, f : R → R, f(x) = ax + b with a ≠ 0, is a bijection; each real number y is obtained from the real number x = (y − b)/a. The function f : R → (−π/2, π/2) given by f(x) = arctan(x) is bijective, since each real number x is paired with exactly one angle y in the interval (−π/2, π/2) so that tan(y) = x.
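The example f(x) = 2x + 1 and its inverse can be checked numerically; a sketch, in which the function names `f` and `g` are ours:

```python
# Sketch: f(x) = 2x + 1 on the reals has inverse g(y) = (y - 1) / 2.
def f(x):
    return 2 * x + 1

def g(y):
    return (y - 1) / 2

# Round-trips: g undoes f and f undoes g (exact for these binary fractions).
for x in [-3.5, 0.0, 1.0, 7.25]:
    assert g(f(x)) == x
    assert f(g(x)) == x

# On a finite set, a bijection pairs elements with no repeats and full
# coverage: f restricted to {0, 1, 2} maps one-to-one onto {1, 3, 5}.
domain = {0, 1, 2}
image = {f(x) for x in domain}
assert image == {1, 3, 5} and len(image) == len(domain)
```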
12.
Cyclic permutation
–
A cyclic permutation (or cycle) is a permutation which maps the elements of some subset S of a set X to each other in a cyclic fashion, while fixing all other elements of X. If S has k elements, the cycle is called a k-cycle. On the other hand, the permutation that sends 1 to 3, 3 to 1, 2 to 4 and 4 to 2 is not a cyclic permutation, because it separately permutes the pairs {1, 3} and {2, 4}. The set S is called the orbit of the cycle. Every permutation on finitely many elements can be decomposed into a collection of cycles on disjoint orbits. The cyclic parts of a permutation are cycles; thus the second example is composed of a 3-cycle and a 1-cycle and the third is composed of two 2-cycles. A permutation is called a cyclic permutation if and only if it has a single nontrivial cycle. For example, the permutation shown, written in two-line notation and also in cycle notation, is a six-cycle; its cycle diagram is shown at right. Some authors restrict the definition to only those permutations which consist of one nontrivial cycle; for example, the permutation shown is a cyclic permutation under this more restrictive definition, while the preceding example is not. This notion is most commonly used when X is a finite set; then, of course, the largest orbit S is also finite. Let s_0 be any element of S, and put s_i = σ^i(s_0) for any i ∈ Z. If S is finite, there is a minimal number k ≥ 1 for which s_k = s_0. Then S = {s_0, s_1, …, s_{k−1}}, and σ is the permutation defined by σ(s_i) = s_{i+1} for 0 ≤ i < k, and σ(x) = x for any element x of X ∖ S. The elements not fixed by σ can be pictured as s_0 ↦ s_1 ↦ s_2 ↦ ⋯ ↦ s_{k−1} ↦ s_k = s_0. A cycle can be written using the compact cycle notation σ = (s_0 s_1 … s_{k−1}). The length of a cycle is the number of elements of its largest orbit; a cycle of length k is also called a k-cycle. The orbit of a 1-cycle is called a fixed point of the permutation. When cycle notation is used, the 1-cycles are often suppressed when no confusion will result. The number of k-cycles in the symmetric group S_n is given, for 1 ≤ k ≤ n, by the equivalent formulas C(n, k)(k − 1)! = n(n − 1)⋯(n − k + 1)/k = n!/((n − k)! k). A k-cycle has signature (−1)^{k−1}. A cycle with only two elements is called a transposition.
For example, the permutation π = (2 4) that swaps 2 and 4 is a transposition. Any permutation can be expressed as the composition of transpositions; formally, they are generators for the group. In fact, when the set being permuted is {1, 2, …, n} for some n, then any permutation can be expressed as a product of adjacent transpositions. This follows because an arbitrary transposition can be expressed as a product of adjacent transpositions. Instead of swapping two distant elements directly, one may roll the elements, keeping a where it is, by executing the right factor first: this moves z to the position of b, so that the transposition executed thereafter addresses z by the index of b, swapping what initially were a and z. In fact, the symmetric group is a Coxeter group, meaning that it is generated by elements of order 2.
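The decomposition into cycles on disjoint orbits described above can be sketched in a few lines of Python; the dict-based representation and the function name are ours:

```python
def cycle_decomposition(perm):
    """Decompose a permutation, given as a dict mapping i -> perm(i),
    into disjoint cycles, each listed from its smallest element."""
    seen, cycles = set(), []
    for start in sorted(perm):
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:
            seen.add(x)
            cycle.append(x)
            x = perm[x]
        cycles.append(tuple(cycle))
    return cycles

# The permutation 1->3, 3->1, 2->4, 4->2 from the text: two 2-cycles,
# hence not a cyclic permutation (no single nontrivial cycle).
assert cycle_decomposition({1: 3, 2: 4, 3: 1, 4: 2}) == [(1, 3), (2, 4)]

# The transposition swapping 2 and 4 while fixing 1 and 3: one 2-cycle
# plus two 1-cycles (fixed points, often suppressed in cycle notation).
assert cycle_decomposition({1: 1, 2: 4, 3: 3, 4: 2}) == [(1,), (2, 4), (3,)]
```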
13.
Disjoint sets
–
In mathematics, two sets are said to be disjoint if they have no element in common. Equivalently, disjoint sets are sets whose intersection is the empty set. For example, {1, 2, 3} and {4, 5, 6} are disjoint sets, while {1, 2, 3} and {3, 4, 5} are not. This definition of disjoint sets can be extended to any family of sets: a family of sets is pairwise disjoint, or mutually disjoint, if every two different sets in the family are disjoint. For example, the collection {{1}, {2}, {3}} is pairwise disjoint. Two sets are said to be almost disjoint if their intersection is small in some sense; for instance, two sets whose intersection is a finite set may be said to be almost disjoint. In topology, there are notions of separated sets with stricter conditions than disjointness. For instance, two sets may be considered separated when they have disjoint closures or disjoint neighborhoods; similarly, in a metric space, positively separated sets are sets separated by a nonzero distance. Disjointness of two sets, or of a family of sets, may be expressed in terms of their intersections: two sets A and B are disjoint if and only if their intersection A ∩ B is the empty set. It follows from the definition that every set is disjoint from the empty set. A family F of sets is pairwise disjoint if, for every two distinct sets in the family, their intersection is empty. If the family contains more than one set, this implies that the intersection of the whole family is also empty. However, a family of just one set is pairwise disjoint, regardless of whether that set is empty. Additionally, a family of sets may have an empty intersection without being pairwise disjoint; for instance, the three sets {1, 2}, {2, 3} and {1, 3} have an empty intersection but are not pairwise disjoint. In fact, there are no two disjoint sets in that collection. Also, the empty family of sets is pairwise disjoint. A Helly family is a system of sets within which the only subfamilies with empty intersections are the ones that are pairwise disjoint.
For instance, the closed intervals of the real numbers form a Helly family: if a family of closed intervals has an empty intersection and is minimal (no proper subfamily also has an empty intersection), it must be pairwise disjoint. A partition of a set X is any collection of mutually disjoint non-empty sets whose union is X. Every partition can equivalently be described by an equivalence relation, a binary relation that describes whether two elements belong to the same set in the partition. A disjoint union may mean one of two things; most simply, it may mean the union of sets that are disjoint.
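These definitions translate directly into code. A small sketch (the helper name is mine; Python's built-in `set.isdisjoint` tests for empty intersection):

```python
def pairwise_disjoint(family):
    """True iff every two distinct sets in the family are disjoint,
    i.e. no element occurs in more than one set of the family."""
    seen = set()
    for s in family:
        if not seen.isdisjoint(s):
            return False
        seen |= s
    return True

a, b = {1, 2, 3}, {4, 5, 6}
assert a.isdisjoint(b)          # A and B are disjoint: A ∩ B = ∅
assert set().isdisjoint(a)      # the empty set is disjoint from every set

# An empty overall intersection does NOT imply pairwise disjointness:
family = [{1, 2}, {2, 3}, {1, 3}]
assert set.intersection(*family) == set()
assert not pairwise_disjoint(family)
```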
14.
Combination
–
In mathematics, a combination is a way of selecting items from a collection, such that the order of selection does not matter. In smaller cases it is possible to count the number of combinations directly. More formally, a k-combination of a set S is a subset of k distinct elements of S. The set of all k-combinations of an n-element set S is sometimes denoted C(n, k); combinations refer to the combination of n things taken k at a time without repetition. To refer to combinations in which repetition is allowed, the terms k-selection, k-multiset, or k-combination with repetition are often used. If, in an example with three kinds of fruit, it were possible to have two of any one kind of fruit, there would be 3 more 2-selections: one with two apples, one with two oranges, and one with two pears. Although a set of three fruits is small enough to write a complete list of combinations, with large sets this becomes impractical. For example, a poker hand can be described as a 5-combination of cards from a 52-card deck: the 5 cards of the hand are all distinct, and the order of cards in the hand does not matter. There are 2,598,960 such combinations, and the chance of drawing any one hand at random is 1/2,598,960. The same numbers occur in other mathematical contexts, where they are called binomial coefficients; notably, they occur as the coefficients in the binomial formula. One can define C(n, k) for all natural numbers k at once by the relation (1 + X)ⁿ = Σ_{k≥0} C(n, k) Xᵏ. Binomial coefficients can be computed explicitly in various ways. To get all of them for the expansions up to (1 + X)ⁿ, one can use the recursion relation C(n, k) = C(n − 1, k − 1) + C(n − 1, k) for 0 < k < n, which follows from (1 + X)ⁿ = (1 + X)(1 + X)ⁿ⁻¹; this leads to the construction of Pascal's triangle. For determining an individual binomial coefficient, it is more practical to use the formula C(n, k) = n(n − 1)⋯(n − k + 1)/k!. When k exceeds n/2, this formula contains factors common to the numerator and the denominator, and canceling them yields the symmetry C(n, k) = C(n, n − k).
This expresses a symmetry that is evident from the binomial formula, and it can also be understood in terms of k-combinations by taking the complement of such a combination, which is an (n − k)-combination. Finally there is a formula which exhibits this symmetry directly, and has the merit of being easy to remember: C(n, k) = n!/(k!(n − k)!). It is obtained from the previous formula by multiplying numerator and denominator by (n − k)!, so it involves much larger intermediate values and is inferior as a method of computation. The last formula can be understood directly by considering the n! permutations of all the elements of S: each such permutation gives a k-combination by selecting its first k elements. For the poker example, C(52, 5) = (52 × 51 × 50 × 49 × 48)/(5 × 4 × 3 × 2 × 1) = 2,598,960. Another alternative computation, equivalent to the first, is based on writing C(n, k) = (n/1) × ((n − 1)/2) × ((n − 2)/3) × ⋯ × ((n − k + 1)/k), which gives C(52, 5) = (52/1) × (51/2) × (50/3) × (49/4) × (48/5) = 2,598,960.
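The multiplicative formula and the symmetry can be combined into a short routine (a sketch; the function name is mine, and `math.comb` is used only as a cross-check). Dividing step by step keeps every intermediate value an integer, because the product of i consecutive integers is divisible by i!.

```python
from math import comb

def binom(n, k):
    """C(n, k) via the multiplicative formula n(n-1)...(n-k+1)/k!,
    dividing as we go so intermediates stay integral."""
    if not 0 <= k <= n:
        return 0
    k = min(k, n - k)              # symmetry: C(n, k) = C(n, n-k)
    result = 1
    for i in range(1, k + 1):
        result = result * (n - i + 1) // i
    return result

assert binom(52, 5) == comb(52, 5) == 2_598_960
# Pascal's rule, the recursion behind Pascal's triangle:
assert binom(10, 4) == binom(9, 3) + binom(9, 4)
```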
15.
Discrete uniform distribution
–
Another way of describing the discrete uniform distribution: there is a known, finite number of outcomes, all equally likely to happen. A simple example of the discrete uniform distribution is throwing a fair die: the possible values are 1, 2, 3, 4, 5, 6, each with probability 1/6. If two dice are thrown and their values added, the resulting distribution is no longer uniform, since not all sums have equal probability. The discrete uniform distribution itself is inherently non-parametric; it is convenient, however, to represent its values generally by an integer interval [a, b], so that a and b become the main parameters of the distribution. A common task is estimating the maximum of the distribution from a sample. This problem is known as the German tank problem, following the application of maximum likelihood estimation to estimates of German tank production during World War II. The UMVU estimator for the maximum is given by N̂ = ((k + 1)/k)·m − 1 = m + m/k − 1, where m is the sample maximum and k is the sample size, sampling without replacement. This can be seen as a simple case of maximum spacing estimation. The estimator has a variance of approximately N²/k² for small samples k ≪ N, so a standard deviation of approximately N/k. The sample maximum itself is the maximum likelihood estimator for the population maximum but, as discussed above, it is biased. If samples are not numbered but are recognizable or markable, one can instead estimate population size via the capture–recapture method. See rencontres numbers for an account of the probability distribution of the number of fixed points of a uniformly distributed random permutation.
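The UMVU estimator is easy to exercise in a simulation (a sketch with illustrative parameters; the function name is mine). Sampling serial numbers without replacement from 1..N, the average of the estimates should land close to N:

```python
import random

def umvu_estimate(sample):
    """German tank UMVU estimator: N_hat = m + m/k - 1,
    where m is the sample maximum and k the sample size."""
    m, k = max(sample), len(sample)
    return m + m / k - 1

random.seed(0)
N, k = 1000, 10
estimates = [umvu_estimate(random.sample(range(1, N + 1), k))
             for _ in range(2000)]
print(sum(estimates) / len(estimates))  # close to N = 1000
```

By contrast, averaging the raw sample maxima over the same runs would systematically undershoot N, which is the bias the correction term m/k − 1 removes.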
16.
Event (probability theory)
–
In probability theory, an event is a set of outcomes of an experiment (a subset of the sample space) to which a probability is assigned. A single outcome may be an element of many different events. An event also defines a complementary event, namely the complementary set (the event not occurring). Typically, when the sample space is finite, any subset of the sample space is an event. However, this approach does not work well in cases where the sample space is uncountably infinite. So, when defining a probability space it is possible, and often necessary, to exclude certain subsets of the sample space from being events. If we assemble a deck of 52 playing cards with no jokers and draw a single card from the deck, the sample space is a 52-element set, and an event is any subset of the sample space, including any singleton set, the empty set and the sample space itself. Other events are subsets of the sample space that contain multiple elements. So, for example, potential events include: red and black at the same time without being a joker (0 elements); the 5 of Hearts (1 element); a King (4 elements); a Face card (12 elements); a Spade (13 elements); a Face card or a red suit (32 elements). Since all events are sets, they are written as sets. Defining all subsets of the sample space as events works well when there are only finitely many outcomes. For many standard probability distributions, such as the normal distribution, attempts to define probabilities for all subsets of the real numbers run into difficulties when one considers badly behaved sets, such as those that are nonmeasurable. Hence, it is necessary to restrict attention to a limited family of subsets. The most natural choice is the family of Borel measurable sets derived from unions and intersections of intervals; however, the larger class of Lebesgue measurable sets proves more useful in practice. In the general description of probability spaces, an event may be defined as an element of a selected σ-algebra of subsets of the sample space. Under this definition, any subset of the sample space that is not an element of the σ-algebra is not an event.
With a reasonable specification of the probability space, however, all events of interest are elements of the σ-algebra. Even though events are subsets of some sample space Ω, they are often written as propositional formulas involving random variables; for example, if X is a random variable defined on the sample space Ω
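For the finite, equally-likely case described above, events really are just subsets and probabilities are just counting. A sketch (representation and names are my own):

```python
# Sample space: a 52-card deck; events are subsets of it.
ranks = ["2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = {(r, s) for r in ranks for s in suits}

face_card = {(r, s) for (r, s) in deck if r in {"J", "Q", "K"}}
red = {(r, s) for (r, s) in deck if s in {"hearts", "diamonds"}}

def prob(event, space):
    """With finitely many equally likely outcomes, P(E) = |E| / |Ω|."""
    return len(event) / len(space)

# "A face card or a red suit" is the union of the two events:
print(prob(face_card | red, deck))  # 32/52
```

The union has 12 + 26 − 6 = 32 elements, matching the element count given in the event list above.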
17.
Harmonic number
–
In mathematics, the n-th harmonic number is the sum of the reciprocals of the first n natural numbers: H_n = 1 + 1/2 + 1/3 + ⋯ + 1/n = Σ_{k=1}^{n} 1/k. Harmonic numbers are related to the harmonic mean in that the n-th harmonic number is also n times the reciprocal of the harmonic mean of the first n positive integers. Harmonic numbers were studied in antiquity and are important in various branches of number theory; they are sometimes loosely termed the harmonic series, are closely related to the Riemann zeta function, and appear in the expressions of various special functions. The harmonic numbers roughly approximate the natural logarithm function, and thus the associated harmonic series grows without limit, albeit slowly. In 1737, Leonhard Euler used the divergence of the harmonic series to provide a new proof of the infinity of prime numbers; his work was extended into the complex plane by Bernhard Riemann in 1859. When the values of a collection of items follow a Zipf's-law distribution, harmonic numbers arise naturally; this leads to a variety of surprising conclusions regarding the Long Tail. Bertrand's postulate entails that, except for the case n = 1, the harmonic numbers are never integers. By definition, the harmonic numbers satisfy the recurrence relation H_n = H_{n−1} + 1/n. The harmonic numbers are also connected to the Stirling numbers of the first kind. The functions f_n(x) = (xⁿ/n!)(ln x − H_n) satisfy the property f_n′ = f_{n−1}; in particular, f_1(x) = x(ln x − 1) is an antiderivative of the logarithmic function. The harmonic numbers satisfy the series identity Σ_{k=1}^{n} H_k = (n + 1)H_n − n; the equality is straightforward from the simple algebraic identity (1 − xⁿ)/(1 − x) = 1 + x + ⋯ + xⁿ⁻¹. The n-th harmonic number is about as large as the natural logarithm of n; the reason is that the sum is approximated by the integral ∫₁ⁿ (1/x) dx, whose value is ln n. The values of the sequence H_n − ln n decrease monotonically towards the limit lim_{n→∞} (H_n − ln n) = γ, where γ ≈ 0.5772 is the Euler–Mascheroni constant. A generating function for the harmonic numbers is Σ_{n=1}^{∞} H_n zⁿ = −ln(1 − z)/(1 − z), where ln is the natural logarithm.
An exponential generating function is Σ_{n=1}^{∞} (zⁿ/n!) H_n = e^z Σ_{k=1}^{∞} ((−1)^{k−1}/(k·k!)) zᵏ = e^z Ein(z), where Ein is the entire exponential integral. Note that Ein(z) = E₁(z) + γ + ln z = Γ(0, z) + γ + ln z, where Γ(0, z) is the incomplete gamma function. The harmonic numbers have several interesting arithmetic properties; it is well known that H_n is an integer if and only if n = 1, a result often attributed to Taeisinger.
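Several of these facts can be checked numerically with exact rational arithmetic (a sketch; the helper name is mine, and `fractions.Fraction` avoids floating-point error in the identities):

```python
from fractions import Fraction
from math import log

def harmonic(n):
    """Exact n-th harmonic number H_n = 1 + 1/2 + ... + 1/n."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

assert harmonic(1) == 1                               # the only integer H_n for small n
assert harmonic(4) == Fraction(25, 12)
assert harmonic(5) == harmonic(4) + Fraction(1, 5)    # the recurrence
# The series identity: sum_{k=1}^{n} H_k = (n+1) H_n - n, here for n = 5:
assert sum(harmonic(k) for k in range(1, 6)) == 6 * harmonic(5) - 5

# H_n - ln n decreases monotonically towards γ ≈ 0.5772...
diffs = [float(harmonic(n)) - log(n) for n in (10, 100, 1000)]
assert diffs[0] > diffs[1] > diffs[2] > 0.5772
```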
18.
Monotonically decreasing
–
In mathematics, a monotonic function is a function between ordered sets that preserves or reverses the given order. This concept first arose in calculus, and was later generalized to the more abstract setting of order theory. In calculus, a function f defined on a subset of the real numbers with real values is called monotonic if it is either entirely non-increasing or entirely non-decreasing. That is, as per Fig. 1, a function that increases monotonically does not have to increase exclusively; it must simply never decrease. A function is called monotonically increasing if for all x and y such that x ≤ y one has f(x) ≤ f(y), so f preserves the order. Likewise, a function is called monotonically decreasing if, whenever x ≤ y, then f(x) ≥ f(y), so it reverses the order. If the order ≤ in the definition of monotonicity is replaced by the strict order <, one obtains a stronger requirement; a function with this property is called strictly increasing. Again, by inverting the order symbol, one finds a corresponding concept called strictly decreasing. The terms non-decreasing and non-increasing should not be confused with the much weaker negative qualifications not decreasing and not increasing. For example, the function of Figure 3 first falls, then rises, then falls again: it is therefore not decreasing and not increasing, but it is neither non-decreasing nor non-increasing. The term monotonic transformation can also cause some confusion, because it refers to a transformation by a strictly increasing function. Notably, this is the case in economics with respect to the properties of a utility function being preserved across a monotonic transform. A function f is said to be absolutely monotonic over an interval if the derivatives of all orders of f are nonnegative, or all nonpositive, at all points on the interval. A monotonic function defined on an interval has useful regularity properties: f can only have jump discontinuities, and f can only have countably many discontinuities in its domain. The discontinuities, however, do not necessarily consist of isolated points. These properties are the reason why monotonic functions are useful in technical work in analysis.
In addition, this result cannot be improved to countable: see the Cantor function. If f is a monotonic function defined on an interval, then f is Riemann integrable. An important application of monotonic functions is in probability theory: if X is a random variable, its cumulative distribution function F_X(x) = Prob(X ≤ x) is a monotonically increasing function. A function is unimodal if it is monotonically increasing up to some point (the mode) and then monotonically decreasing. When f is a strictly monotonic function, then f is injective on its domain, and if T is the range of f, then there is an inverse function on T for f. In the context of topology, a map f : X → Y is said to be monotone if each of its fibers is connected, i.e. for each element y in Y the set f⁻¹(y) is connected. In functional analysis, a subset G of X × X∗ is said to be a monotone set if for every pair [u₁, w₁] and [u₂, w₂] in G, ⟨w₁ − w₂, u₁ − u₂⟩ ≥ 0. G is said to be maximal monotone if it is maximal among all monotone sets in the sense of set inclusion.
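The distinction between monotonically (non-strictly) increasing, strictly increasing, and merely "not decreasing" can be made concrete with a short check over a finite sequence (a sketch; the helper name is mine, and comparing consecutive terms suffices for sequences):

```python
def monotone_increasing(seq, strict=False):
    """x <= y implies f(x) <= f(y); for a finite sequence it is enough
    to compare consecutive terms. strict=True demands f(x) < f(y)."""
    pairs = list(zip(seq, seq[1:]))
    if strict:
        return all(a < b for a, b in pairs)
    return all(a <= b for a, b in pairs)

assert monotone_increasing([1, 2, 2, 5])                 # non-decreasing
assert not monotone_increasing([1, 2, 2, 5], strict=True)  # plateau breaks strictness
# "Not decreasing" is weaker than "non-decreasing": this sequence is
# not decreasing overall, yet it is neither non-decreasing nor non-increasing.
assert not monotone_increasing([3, 1, 4, 1, 5])
```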
19.
Aarhus University
–
Aarhus University is a prestigious public university located in Aarhus, Denmark. Founded in 1928, it is Denmark's second-oldest university and its largest, with a total of 44,500 enrolled students as of 1 January 2013. In the most prestigious rankings of the world's best universities, Aarhus University is placed in the top 100. The university belongs to the Coimbra Group of European universities. The business school within Aarhus University, called Aarhus BSS, holds the EFMD EQUIS accreditation as well as accreditations from the Association to Advance Collegiate Schools of Business (AACSB) and the Association of MBAs (AMBA). This makes the business school of Aarhus University one of the few in the world to hold the so-called Triple Crown of accreditations. Aarhus University was founded on 11 September 1928 as Universitetsundervisningen i Jylland with a budget of 33,000 DKr and an enrollment of 64 students. The university was founded as a response to the increasing number of students at the University of Copenhagen after World War I. Classrooms were rented from the Technical College, and the teaching corps consisted of one professor of philosophy and four associate professors of Danish, English, and German. In 1929, the municipality of Aarhus gave the university land with a landscape of rolling hills. The design of the university buildings and the 12 ha campus area was assigned to the architects C. F. Møller, Kay Fisker and Povl Stegmann, who won the architectural competition in 1931. The first buildings housed the Departments of Chemistry, Physics and Anatomy and were opened on 11 September 1933. The construction of the buildings was funded solely by donations, which totaled 935,000 DKr, and the buildings covered an area of 4,190 m². One of the most generous contributors was De Forenede Teglværker i Aarhus, led by director K. Nymark.
De Forenede Teglværker decided to donate 1 million yellow bricks and tiles worth c. 50,000 DKr. On 23 April 1934, Aarhus University was given permission by the king to hold examinations, and on 10 October 1935, Professor Dr. phil. Ernst Frandsen was appointed the first rector of the university. Aarhus University had offered courses in basic medical subjects from 1933, and on 10 October 1935 the Faculty of Medicine was formally established. The establishment of a Faculty of Medicine in Aarhus was met with some opposition from the Faculty of Medicine at the University of Copenhagen, whose professors thought that the state should not establish a new faculty until the shortcomings of the old one had been resolved. In the end, the professors agreed to sign a recommendation for the new faculty as long as improvements to the old one were not delayed. In 1992, the Faculty of Medicine merged with the dental school. The committee approved, and by declaration of the king on 5 November 1937, the faculty could hold examinations in economics and law. Courses had been offered in theology since 1932 at the Faculty of Humanities. Already on 22 June 1928, Reverend Balslev of Aarhus had proposed that Universitetsundervisningen i Aarhus teach basic courses in theology; at that time, however, it did not have the means to meet the necessary criteria, so the case was shelved for the time being. In April 1931, the case was reopened, this time by Bishop Skat Hoffmeyer, who proposed free teaching in the required subjects; on 5 September 1932, Reverend Asmund held the first lecture in theology.
20.
Elwyn Berlekamp
–
Elwyn Ralph Berlekamp is an American mathematician. He is a professor emeritus of mathematics and EECS at the University of California, Berkeley. Berlekamp is known for his work in coding theory and combinatorial game theory. Berlekamp was born in Dover, Ohio. His family moved to Northern Kentucky, where Berlekamp graduated from Fort Thomas Highlands High School in Fort Thomas, Campbell County. While an undergraduate at the Massachusetts Institute of Technology, he was a Putnam Fellow in 1961. He completed his bachelor's and master's degrees in engineering in 1962. Continuing his studies at MIT, he finished his Ph.D. in electrical engineering in 1964; his advisors were Robert G. Gallager, Peter Elias, Claude Shannon, and John Wozencraft. Berlekamp taught electrical engineering at the University of California, Berkeley from 1964 until 1966. In 1971, Berlekamp returned to Berkeley as professor of mathematics and EECS, where he served as the advisor for over twenty doctoral students. He is a member of the National Academy of Engineering and the National Academy of Sciences; he was elected a Fellow of the American Academy of Arts and Sciences in 1996, and became a fellow of the American Mathematical Society in 2012. In 1991, he received the IEEE Richard W. Hamming Medal, and in 1998, he received a Golden Jubilee Award for Technological Innovation from the IEEE Information Theory Society. He is on the board of directors of Gathering 4 Gardner. In the mid-1980s, he was president of Cyclotomics, Inc., a corporation that developed error-correcting code technology. With John Horton Conway and Richard K. Guy, he co-authored Winning Ways for your Mathematical Plays. He has studied various games, including dots and boxes, Fox and Geese, and, especially, Go.
With David Wolfe, Berlekamp co-authored the book Mathematical Go, which describes methods for analyzing certain classes of Go endgames. Outside of mathematics and computer science, Berlekamp has also been active in money management. In 1986, he began studies of commodity and financial futures. In 1989, Berlekamp purchased the largest interest in a company named Axcom Trading Advisors. After the firm's futures trading algorithms were rewritten, Axcom's Medallion Fund had a return of 55%, net of all management fees; the fund has subsequently continued to realize annualized returns exceeding 30% under management by James Harris Simons and his Renaissance Technologies Corporation. Berlekamp and his wife Jennifer have two daughters and a son and live in Piedmont, California. Selected works: Ph.D. thesis, Massachusetts Institute of Technology, Dept. of Electrical Engineering, 1964; Algebraic Coding Theory, New York: McGraw-Hill, 1968, revised ed. Aegean Park Press, 1984, ISBN 0-89412-063-8; Winning Ways for your Mathematical Plays, 1st edition, New York: Academic Press, 2 vols.
21.
Mathematical Sciences Research Institute
–
The Mathematical Sciences Research Institute (MSRI) is widely regarded as a world-leading center for collaborative mathematical research, drawing thousands of visiting researchers from around the world each year. The Institute is located at 17 Gauss Way, on the University of California, Berkeley campus, close to Grizzly Peak. MSRI was founded in 1982 by Shiing-Shen Chern, Calvin Moore, and Isadore M. Singer. Researchers, some 2,000 per year, come to MSRI to work in an environment that promotes creativity, and the Institute's prize-winning forty-eight-thousand-square-foot building enjoys spectacular views of the San Francisco Bay. After 30 years of activity, the reputation of the Institute is such that mathematicians make it a priority to participate in its programs. MSRI also serves the wider community through the development of human scientific capital, providing postdoctoral training to young scientists, and advances mathematics education with conferences on critical issues in the field. Additionally, it hosts research workshops that are unconnected to the main programs; during the summer, workshops for graduate students are held through the MSRI-UP program. MSRI sponsors programs for middle and high school students and their teachers as part of the Math Circles and Circles for Teachers, which meet weekly in San Francisco, Berkeley, and Oakland. It also sponsors the Bay Area Mathematical Olympiad and the Julia Robinson Mathematics Festival. Because of its contribution to the nation's scientific potential, MSRI's activity is supported by the National Science Foundation and the National Security Agency. Private individuals, foundations, and nearly 100 Academic Sponsor Institutions, including the top mathematics departments in the United States, provide crucial support; James Simons, founder of Renaissance Technologies, is among them. Furthermore, the lectures given at MSRI events are videotaped and made available for free on the internet.
MSRI has sponsored events that reach out to the non-mathematical public. Its Simons Auditorium hosts special performances of classical music, and the Institute created a series of mathematical puzzles that were posted among the advertising placards on San Francisco Muni buses. Mathematical Sciences Research Institute; MSRI streaming lectures; NSF Math Institutes.
22.
Read-only memory
–
Read-only memory (ROM) is a type of non-volatile memory used in computers and other electronic devices. Data stored in ROM can only be modified slowly, with difficulty, or not at all. Strictly, read-only memory refers to memory that is hard-wired, such as a diode matrix and the later mask ROM, which cannot be changed after manufacture. Although discrete circuits can be altered in principle, integrated circuits cannot, and that such memory can never be changed is a disadvantage in many applications, as bugs and security issues cannot be fixed and new features cannot be added. More recently, ROM has come to include memory that is read-only in normal operation. The simplest type of solid-state ROM is as old as semiconductor technology itself: combinational logic gates can be joined manually to map an n-bit address input onto arbitrary values of m-bit data output. With the invention of the integrated circuit came mask ROM, in which the data is physically encoded in the circuit. This leads to a number of disadvantages: 1) it is only economical to buy mask ROM in large quantities; 2) the turnaround time between completing the design for a mask ROM and receiving the finished product is long; 3) for the same reason, mask ROM is impractical for R&D work, since designers frequently need to modify the contents of memory as they refine a design; 4) if a product is shipped with faulty mask ROM, the only way to fix it is to recall the product. Subsequent developments have addressed these shortcomings. PROM, invented in 1956, allowed users to program its contents exactly once by physically altering its structure with the application of high-voltage pulses. This addressed problems 1 and 2 above, since a company can order a large batch of fresh PROM chips and program them as needed. The 1971 invention of EPROM essentially solved problem 3, since EPROM can be reset to its unprogrammed state by exposure to strong ultraviolet light.
All of these technologies improved the flexibility of ROM, but at a significant cost per chip; rewriteable technologies were envisioned as replacements for mask ROM. The most recent development is NAND flash, also invented at Toshiba. As of 2007, NAND has partially achieved this goal by offering throughput comparable to hard disks, higher tolerance of physical shock, extreme miniaturization, and much lower power consumption. Every stored-program computer may use a form of non-volatile storage to store the initial program that runs when the computer is powered on or otherwise begins execution. Likewise, every non-trivial computer needs some form of mutable memory to record changes in its state as it executes. Forms of read-only memory were employed as non-volatile storage for programs in most early stored-program computers; consequently, ROM could be implemented at a lower cost-per-bit than RAM for many years. Most home computers of the 1980s stored a BASIC interpreter or operating system in ROM, as other forms of storage, such as magnetic disk drives, were too costly.
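Conceptually, the simple ROM described above is a fixed lookup from an n-bit address to an m-bit word, with no write path. A minimal Python sketch (the contents are arbitrary illustrative values, not from the source):

```python
# Contents fixed at "manufacture": 2-bit address space, 4-bit words.
# A tuple is immutable, mirroring the read-only property.
ROM = (0b1010, 0b0111, 0b0000, 0b1111)

def read(address):
    """Combinational read path: address in, stored word out."""
    return ROM[address & 0b11]  # decode only the 2 address bits

assert read(0) == 0b1010
assert read(3) == 0b1111
```

Any attempt to assign to `ROM[...]` raises an error, which is the software analogue of there being no write circuitry at all.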
23.
Sign (mathematics)
–
In mathematics, the concept of sign originates from the property of every non-zero real number of being either positive or negative. Zero itself is signless, although in some contexts it makes sense to consider a signed zero. Along with its application to real numbers, change of sign is used throughout mathematics and physics to denote the additive inverse, even for quantities which are not real numbers. The word sign can also indicate aspects of mathematical objects that resemble positivity and negativity. A real number is said to be positive if its value is greater than zero, and negative if it is less than zero. The attribute of being positive or negative is called the sign of the number; zero itself is not considered to have a sign. Also, signs are not defined for complex numbers, although the argument generalizes the notion in some sense. In common numeral notation, the sign of a number is often denoted by placing a plus sign or a minus sign before the number: for example, +3 denotes positive three, and −3 denotes negative three. When no plus or minus sign is given, the default interpretation is that a number is positive. Because of this notation, as well as the definition of negative numbers through subtraction, the minus sign is also used to denote the additive inverse; in this context, it makes sense to write −(−3) = +3. Any non-zero number can be changed to a positive one using the absolute value function: the absolute value of −3 and the absolute value of 3 are both equal to 3. In symbols, this would be written |−3| = 3 and |3| = 3. The number zero is neither positive nor negative, and therefore has no sign; in arithmetic, +0 and −0 both denote the same number 0, which is the additive inverse of itself. Note that this convention is culturally determined: in France and Belgium, 0 is said to be both positive and negative, and the positive (resp. negative) numbers without zero are said to be strictly positive (resp. strictly negative). In some contexts, such as signed number representations in computing, it makes sense to consider signed versions of zero, with positive zero and negative zero being different numbers.
One also sees +0 and −0 in calculus and mathematical analysis when evaluating one-sided limits. This notation refers to the behaviour of a function as the input variable approaches 0 from positive or negative values respectively; the two behaviours are not necessarily the same. Because zero is neither positive nor negative, the following phrases are sometimes used to refer to the sign of an unknown number: a number is negative if it is less than zero; a number is non-negative if it is greater than or equal to zero; a number is non-positive if it is less than or equal to zero. Thus a non-negative number is either positive or zero, while a non-positive number is either negative or zero.
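The sign and absolute-value conventions above, including the computing notion of a signed zero, can be illustrated in a few lines (a sketch; the `sign` helper is my own, implementing the common signum convention):

```python
from math import copysign

def sign(x):
    """+1 for positive numbers, -1 for negative numbers, 0 for zero."""
    return (x > 0) - (x < 0)

assert sign(3) == 1 and sign(-3) == -1 and sign(0) == 0
assert abs(-3) == abs(3) == 3          # |−3| = |3| = 3

# IEEE 754 floating point carries a signed zero: +0.0 and -0.0 compare
# equal as numbers, but the sign bit is distinguishable via copysign.
assert 0.0 == -0.0
assert copysign(1.0, -0.0) == -1.0
assert copysign(1.0, 0.0) == 1.0
```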
24.
Scientific American
–
Scientific American is an American popular science magazine. Many famous scientists, including Albert Einstein, have contributed articles over the past 170 years, and it is the oldest continuously published monthly magazine in the United States. Scientific American was founded by inventor and publisher Rufus M. Porter in 1845 as a weekly newspaper. Throughout its early years, much emphasis was placed on reports of what was going on at the U.S. Patent Office. Current issues include a "this date in history" section, featuring excerpts from articles originally published 50, 100, and 150 years earlier; topics include humorous incidents, wrong-headed theories, and noteworthy advances in the history of science. Porter sold the publication to Alfred Ely Beach and Orson Desaix Munn a mere ten months after founding it. Until 1948, it remained owned by Munn & Company; under Munn's grandson, Orson Desaix Munn III, it had evolved into something of a workbench publication, similar to the twentieth-century incarnation of Popular Science. In the years after World War II, the magazine fell into decline. Thus the partners (publisher Gerard Piel, editor Dennis Flanagan, and general manager Donald H. Miller) created essentially a new magazine. Miller retired in 1979, Flanagan and Piel in 1984, when Gerard Piel's son Jonathan became president and editor; circulation had grown fifteen-fold since 1948. In 1986, it was sold to the Holtzbrinck group of Germany, and in the fall of 2008, Scientific American was put under the control of Nature Publishing Group, a division of Holtzbrinck. Donald Miller died in December 1998, Gerard Piel in September 2004. Mariette DiChristina is the current editor-in-chief, after John Rennie stepped down in June 2009. Scientific American published its first foreign edition in 1890, the Spanish-language La America Cientifica. A Russian edition, V Mire Nauki, was launched in the Soviet Union in 1983, and continues in the present-day Russian Federation.
Kexue, a simplified Chinese edition launched in 1979, was the first Western magazine published in the People's Republic of China. Founded in Chongqing, the simplified Chinese magazine was transferred to Beijing in 2001. Later, in 2005, a new edition, Global Science, was published instead of Kexue. A traditional Chinese edition, known as 科學人, was introduced to Taiwan in 2002. The Hungarian edition, Tudomány, existed between 1984 and 1992. In 1986, an Arabic edition, Oloom magazine, was published, and in 2002, a Portuguese edition was launched in Brazil. From 1902 to 1911, Scientific American supervised the publication of the Encyclopedia Americana. It originally styled itself "The Advocate of Industry and Enterprise" and "Journal of Mechanical and other Improvements". On the front page of the first issue was the engraving of "Improved Rail-Road Cars". The masthead had a commentary as follows: "Scientific American, published every Thursday morning at No. 11 Spruce Street, New York, No. 16 State Street, Boston, and No. 21 Arcade Philadelphia, by Rufus Porter. Five copies will be sent to one address six months for four dollars in advance."
25.
Richard P. Stanley
–
Richard Peter Stanley is the Norman Levinson Professor of Applied Mathematics at the Massachusetts Institute of Technology, in Cambridge, Massachusetts. He received his Ph.D. at Harvard University in 1971 under the supervision of Gian-Carlo Rota. He is an expert in the field of combinatorics and its applications to other mathematical disciplines. Stanley is known for his two-volume book Enumerative Combinatorics; he is also the author of Combinatorics and Commutative Algebra and well over 100 research articles in mathematics. He has served as thesis advisor to more than 58 doctoral students. Stanley's distinctions include membership in the National Academy of Sciences and the 2001 Leroy P. Steele Prize. Selected works: Stanley, Richard P., Combinatorics and Commutative Algebra; Stanley, Richard P., Enumerative Combinatorics, Volumes 1 and 2. See also: Stanley decomposition; Stanley's reciprocity theorem; Exponential formula; Richard Stanley's Homepage.
26.
Quantum complexity theory
–
Quantum complexity theory is a part of computational complexity theory in theoretical computer science. It studies complexity classes defined using quantum computers and quantum information, which are computational models based on quantum mechanics, and the hardness of problems in relation to these classes. A complexity class is a collection of problems that can be solved by some computational model under given resource constraints. For instance, the complexity class P is defined to be the set of problems solvable by a Turing machine in polynomial time. Similarly, one may define a quantum complexity class using a quantum model of computation, such as a standard quantum computer or a quantum Turing machine: the complexity class BQP is defined to be the set of problems solvable by a quantum computer in polynomial time with bounded error. Two important quantum complexity classes are BQP and QMA, the quantum analogues of P and NP. One of the aims of quantum complexity theory is to find out where these classes lie with respect to classical complexity classes such as P, NP, PP, and PSPACE. In the query complexity model, the input is given as an oracle, and the algorithm gets information about the input only by querying the oracle. The algorithm starts in some fixed quantum state, and the state evolves as it queries the oracle. The quantum query complexity of a function is the smallest number of queries to the oracle required to compute it; it is therefore a lower bound on the overall time complexity of the function. An example illustrating the power of quantum computing is Grover's algorithm for searching unstructured databases: its quantum query complexity is O(√N), quadratically better than the best possible classical query complexity, which is linear in N. The Complexity Zoo is a place to read more about quantum complexity theory.
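To make the quadratic speedup concrete, the following sketch (function names are my own, for illustration only) compares the worst-case classical query count for unstructured search with the roughly (π/4)√N oracle queries Grover's algorithm uses; it tabulates the numbers rather than simulating the quantum algorithm itself.

```python
import math

def classical_queries_worst_case(n):
    # Unstructured search with one marked item among n: a classical
    # algorithm may need to query the oracle up to n times.
    return n

def grover_queries(n):
    # Grover's algorithm finds the marked item with high probability
    # using about (pi/4) * sqrt(n) oracle queries.
    return math.ceil((math.pi / 4) * math.sqrt(n))

for n in (100, 10_000, 1_000_000):
    print(n, classical_queries_worst_case(n), grover_queries(n))
```

For a database of a million items, the classical worst case is a million queries while Grover's algorithm needs on the order of a thousand, which is the quadratic gap described above.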
27.
Monty Hall problem
–
The Monty Hall problem is a brain teaser, in the form of a probability puzzle, loosely based on the American television game show Let's Make a Deal and named after its original host, Monty Hall. The problem was posed in a letter by Steve Selvin to the American Statistician in 1975. In its well-known form, you are given the choice of three doors; you pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice? Vos Savant's response was that the contestant should switch to the other door: under the standard assumptions, contestants who switch have a 2/3 chance of winning the car, while contestants who stick to their initial choice have only a 1/3 chance. The given probabilities depend on specific assumptions about how the host behaves. Another insight is that switching doors is a different action from choosing between the two remaining doors at random, because the first action uses the earlier information and the latter does not. Host behaviors other than the one described can reveal different additional information, or none at all. Many readers of vos Savant's column refused to believe switching is beneficial despite her explanation. After the problem appeared in Parade, approximately 10,000 readers, including nearly 1,000 with PhDs, wrote to the magazine, and even when given explanations, simulations, and formal mathematical proofs, many people still do not accept that switching is the best strategy. Even Paul Erdős, one of the most prolific mathematicians in history, remained unconvinced until he was shown a computer simulation demonstrating the predicted result. The problem is a paradox of the veridical type, because the correct result is so counterintuitive it can seem absurd, but is nevertheless demonstrably true. The Monty Hall problem is closely related to the earlier Three Prisoners problem. Steve Selvin wrote a letter to the American Statistician in 1975 describing a problem based on the game show Let's Make a Deal: you pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat, and then says to you, "Do you want to pick door No. 2?"
Is it to your advantage to switch your choice? The behavior of the host is key to the 2/3 solution, and ambiguities in the Parade version leave the host's protocol undefined. The standard assumptions are that the host must always open a door to reveal a goat, and must always offer the chance to switch between the originally chosen door and the remaining closed door. When any of these assumptions is varied, it can change the probability of winning by switching doors, as detailed in the section below. It is also presumed that the car is initially hidden behind a random door.
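The 2/3 result is easy to check by simulation. The sketch below (function name mine) plays the game many times under the standard assumptions just listed and reports the empirical win rate for each strategy.

```python
import random

def play(switch, trials=100_000):
    # Simulate Monty Hall under the standard assumptions: the car is
    # behind a uniformly random door, the host always opens a goat
    # door the contestant did not pick, and always offers the switch.
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(play(switch=True))   # close to 2/3
print(play(switch=False))  # close to 1/3
```

Running it shows the switcher winning about twice as often as the sticker, which is exactly the counterintuitive result so many readers rejected.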
28.
Prisoner's dilemma
–
The prisoner's dilemma was originally framed by Merrill Flood and Melvin Dresher, working at RAND in 1950. Albert W. Tucker formalized the game with prison-sentence rewards and named it the "prisoner's dilemma", presenting it as follows. Each prisoner is in solitary confinement with no means of communicating with the other. The prosecutors lack sufficient evidence to convict the pair on the principal charge, but they hope to get both sentenced to a year in prison on a lesser charge. Simultaneously, the prosecutors offer each prisoner a bargain: each is given the opportunity either to betray the other by testifying that the other committed the crime, or to cooperate with the other by remaining silent. The interesting part of this result is that pursuing individual reward logically leads both prisoners to betray, when they would get a better outcome if they both kept silent. In reality, humans display a systematic bias towards cooperative behavior in this and similar games, much more so than predicted by simple models of rational self-interested action. If the number of times the game will be played is known to the players, backward induction implies that both should defect in every round; in an infinite or unknown-length game there is no fixed optimum strategy, and prisoner's dilemma tournaments have been held to compete and test algorithms. The prisoner's dilemma can be used as a model for many real-world situations involving cooperative behaviour. The two prisoners cannot communicate; they are separated into two individual rooms. Regardless of what the other decides, each gets a higher reward by betraying the other. The reasoning involves an argument by dilemma: B will either cooperate or defect. If B cooperates, A should defect, because going free is better than serving 1 year. If B defects, A should also defect, because serving 2 years is better than serving 3. So either way, A should defect.
Parallel reasoning shows that B should also defect: because defection always results in a better payoff than cooperation regardless of the other player's choice, it is a dominant strategy. Mutual defection is the only strong Nash equilibrium in the game. The structure of the traditional prisoner's dilemma can be generalized from its original prisoner setting. Suppose that the two players are represented by the colors red and blue, and that each player chooses either to Cooperate or to Defect. If both players cooperate, they each receive the reward R for cooperating; if both players defect, they each receive the punishment payoff P; and if one defects while the other cooperates, the defector receives the temptation payoff T while the cooperator receives the sucker's payoff S, with T > R > P > S. The donation game is a form of prisoner's dilemma in which cooperation corresponds to offering the other player a benefit b at a personal cost c, with b > c, so that T = b, R = b − c, P = 0, and S = −c. Note that 2R > T + S, which qualifies the donation game to be an iterated game. The donation game may be applied to markets.
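The donation-game payoffs above can be written down directly and checked against the prisoner's dilemma conditions; the sketch below (function name mine) does exactly that for illustrative values of b and c.

```python
def donation_payoffs(b, c):
    # Donation game: cooperating means paying a personal cost c
    # to give the other player a benefit b, with b > c.
    T = b       # temptation: defect while the other cooperates
    R = b - c   # reward: both cooperate
    P = 0       # punishment: both defect
    S = -c      # sucker's payoff: cooperate while the other defects
    return T, R, P, S

T, R, P, S = donation_payoffs(b=3, c=1)
assert T > R > P > S     # the prisoner's dilemma payoff ordering
assert 2 * R > T + S     # the iterated-game condition from the text
# Defection is dominant: it pays more whatever the opponent does.
assert T > R and P > S
print((T, R, P, S))      # (3, 2, 0, -1)
```

With b = 3 and c = 1 the ordering 3 > 2 > 0 > −1 holds, and 2R = 4 exceeds T + S = 2, which is just the b > c condition restated.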
29.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN configuration was devised in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966, and the 10-digit ISBN format was developed by the International Organization for Standardization and published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines. The ISBN scheme was devised in 1967 in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974, and the ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns published by Hodder in 1965 has SBN 340-01381-8: 340 indicating the publisher, 01381 their serial number, and 8 the check digit. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Number EAN-13s.
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces; separating the parts of a 10-digit ISBN is likewise done with hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for a country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture. In the United Kingdom, the United States, and some other countries, the service is provided by non-government-funded organisations and publishers pay a fee. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker.
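The SBN-to-ISBN conversion and the check-digit arithmetic mentioned above can be shown in a few lines. The sketch below (function names mine) computes ISBN-10 and ISBN-13 check digits and reproduces the Mr. J. G. Reeder Returns example from the text.

```python
def isbn10_check_digit(first9):
    # ISBN-10: the check digit makes the weighted sum of all ten
    # digits divisible by 11.  With weights 1..9 on the first nine
    # digits, it equals the weighted sum mod 11; 10 is written 'X'.
    total = sum((i + 1) * int(d) for i, d in enumerate(first9))
    check = total % 11
    return 'X' if check == 10 else str(check)

def sbn_to_isbn10(sbn):
    # An SBN becomes an ISBN-10 by prefixing the digit 0;
    # the check digit is unchanged.
    return '0' + sbn

def isbn13_check_digit(first12):
    # ISBN-13 (EAN-13): digits are weighted alternately 1 and 3,
    # and the check digit brings the total to a multiple of 10.
    total = sum((3 if i % 2 else 1) * int(d) for i, d in enumerate(first12))
    return str((10 - total % 10) % 10)

# The example from the text: SBN 340-01381-8 becomes ISBN 0-340-01381-8,
# and the ISBN-10 check digit for 0-340-01381 is indeed 8.
print(sbn_to_isbn10('340013818'))       # '0340013818'
print(isbn10_check_digit('034001381'))  # '8'
```

The ISBN-10 check digit is unchanged by the conversion precisely because the prefixed 0 contributes nothing to the weighted sum.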
30.
MinutePhysics
–
MinutePhysics is an educational YouTube channel created by Henry Reich. The channel's videos use time-lapsed drawing to explain physics-related topics in approximately one minute; as of December 2016, the channel had over 3.64 million subscribers. Videos from MinutePhysics have been featured on PBS NewsHour, the Huffington Post, and NBC, and the channel can also be viewed through YouTube EDU. The most popular MinutePhysics video, with over 10.5 million views, is the one explaining the consequences when an unstoppable force meets an immovable object; another popular video features Reich explaining why pink is not actually a color. Reich has also uploaded a series of three videos explaining the Higgs boson, and the channel collaborated with physicist Sean M. Carroll on a five-part video series on time and entropy. MinutePhysics is also available to download as a podcast on iTunes.
31.
YouTube
–
YouTube is an American video-sharing website headquartered in San Bruno, California. The service was created by three former PayPal employees—Chad Hurley, Steve Chen, and Jawed Karim—in February 2005. Google bought the site in November 2006 for US$1.65 billion, and YouTube now operates as one of Google's subsidiaries. Unregistered users can watch videos on the site, while registered users are permitted to upload an unlimited number of videos. Videos deemed potentially offensive are available only to registered users affirming themselves to be at least 18 years old. YouTube earns advertising revenue from Google AdSense, a program which targets ads according to site content and audience. YouTube was founded by Chad Hurley, Steve Chen, and Jawed Karim. Hurley had studied design at Indiana University of Pennsylvania, while Chen and Karim studied computer science together at the University of Illinois at Urbana-Champaign. Karim has said he was motivated by the difficulty of finding certain video clips online; Hurley and Chen said that the original idea for YouTube was a video version of an online dating service, influenced by the website Hot or Not. YouTube began as a venture capital-funded technology startup, funded primarily by an $11.5 million investment from Sequoia Capital between November 2005 and April 2006. YouTube's early headquarters were situated above a pizzeria and Japanese restaurant in San Mateo, California. The domain name www.youtube.com was activated on February 14, 2005. The first YouTube video, titled Me at the zoo, shows co-founder Jawed Karim at the San Diego Zoo; it was uploaded on April 23, 2005, and can still be viewed on the site. YouTube offered the public a beta test of the site in May 2005, and the first video to reach one million views was a Nike advertisement featuring Ronaldinho in November 2005.
Following a $3.5 million investment from Sequoia Capital in November, the site grew rapidly, and in July 2006 the company announced that more than 65,000 new videos were being uploaded every day and that the site was receiving 100 million video views per day. The site has 800 million unique users a month, and it is estimated that in 2007 YouTube consumed as much bandwidth as the entire Internet in 2000. The choice of the name www.youtube.com led to problems for a similarly named website: that site's owner, Universal Tube & Rollform Equipment, filed a lawsuit against YouTube in November 2006 after being regularly overloaded by people looking for YouTube, and has since changed the name of its website to www.utubeonline.com. In October 2006, Google Inc. announced that it had acquired YouTube for $1.65 billion in Google stock, and the deal was finalized on November 13, 2006. In March 2010, YouTube began free streaming of certain content; according to YouTube, this was the first worldwide free online broadcast of a major sporting event. On March 31, 2010, the YouTube website launched a new design with the aim of simplifying the interface; Google product manager Shiva Rajaraman commented, "We really felt like we needed to step back and remove the clutter." In May 2010, YouTube videos were watched more than two billion times per day. This increased to three billion in May 2011 and four billion in January 2012, and in February 2017, one billion hours of YouTube were watched every day.