1.
Probability theory
–
Probability theory is the branch of mathematics concerned with probability, the analysis of random phenomena. Although it is not possible to predict precisely the results of random events, when a large number of such events are considered their aggregate behaviour exhibits patterns that can be studied and predicted. Two mathematical results describing such patterns are the law of large numbers and the central limit theorem. As a mathematical foundation for statistics, probability theory is essential to many human activities that involve quantitative analysis of large sets of data. Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state, as in statistical mechanics. A great discovery of twentieth-century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics. In the 19th century, Pierre-Simon Laplace completed what is today considered the classic interpretation of probability. Initially, its methods were mainly combinatorial. Eventually, analytical considerations compelled the incorporation of continuous variables into the theory. This culminated in modern probability theory, on foundations laid by Andrey Nikolaevich Kolmogorov, who combined the notion of sample space with measure theory and presented his axiom system for probability theory in 1933. Most introductions to probability theory treat discrete probability distributions and continuous probability distributions separately. The more mathematically advanced, measure theory-based treatment of probability covers the discrete, the continuous, any mix of these two, and more. Consider an experiment that can produce a number of outcomes. The set of all outcomes is called the sample space of the experiment.
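The two patterns named above can be illustrated with a short simulation. This is a sketch (the fair-die setup and sample sizes are illustrative choices, not from the article): the running mean of many die rolls settles near the true mean (law of large numbers), and the means of many small samples cluster around that same value (central limit theorem).

```python
import random
import statistics

random.seed(0)

# Law of large numbers: the mean of many fair-die rolls
# approaches the true expected value of 3.5.
rolls = [random.randint(1, 6) for _ in range(100_000)]
running_mean = statistics.mean(rolls)

# Central limit theorem: means of many small samples cluster
# around 3.5 in a roughly bell-shaped distribution.
sample_means = [
    statistics.mean(random.randint(1, 6) for _ in range(50))
    for _ in range(2_000)
]

print(round(running_mean, 2))                   # close to 3.5
print(round(statistics.mean(sample_means), 2))  # also close to 3.5
```

The second list, plotted as a histogram, would approximate the normal distribution shown in the figure below.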
Probability theory
–
The normal distribution, a continuous probability distribution.
Probability theory
–
The Poisson distribution, a discrete probability distribution.
2.
Glossary of probability and statistics
–
The following is a glossary of terms used in the mathematical sciences of statistics and probability.
alternative hypothesis: the hypothesis against which the null hypothesis is tested.
atomic event: another name for elementary event.
bar chart: a chart with rectangular bars whose lengths are proportional to the values they represent.
bias: 1. A sample that is not representative of the population. 2. The difference between the expected value of an estimator and the true value.
causal study: a study that investigates cause and effect, for example: how will my headache feel if I take aspirin, versus if I do not take aspirin? Causal studies may be either experimental or observational.
confidence level: also known as a confidence coefficient, the confidence level indicates the probability that the confidence interval captures the true population mean. For example, a confidence interval with a 95 percent confidence level has a 95 percent chance of capturing the population mean; if the experiment were repeated many times, 95 percent of the intervals would contain the true population mean.
correlation: also called the correlation coefficient, a numeric measure of the strength of the linear relationship between two random variables. An example is the Pearson product-moment correlation coefficient, found by dividing the covariance of the two variables by the product of their standard deviations.
expected value: the sum of the probability of each possible outcome of the experiment multiplied by its payoff. It represents the average amount one "expects" to win per bet if bets with identical odds are repeated many times; for example, the expected value of a fair die roll is 3.5. The concept is similar to the mean, but differs from the sample mean, which can be measured directly.
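The expected-value and correlation entries above can be computed directly. A minimal sketch (the small data set for the correlation is an invented example, not from the glossary):

```python
from fractions import Fraction
import statistics

# Expected value of a fair die roll: sum of outcome × probability.
outcomes = [1, 2, 3, 4, 5, 6]
expected = sum(Fraction(x, 6) for x in outcomes)
print(float(expected))  # 3.5

# Pearson product-moment correlation: covariance divided by the
# product of the standard deviations (population versions here).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.0, 5.0, 4.0, 5.0]
mx, my = statistics.mean(xs), statistics.mean(ys)
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
r = cov / (statistics.pstdev(xs) * statistics.pstdev(ys))
print(round(r, 3))  # 0.775
```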
Glossary of probability and statistics
–
Statistics
3.
Notation in probability and statistics
–
Probability theory and statistics have some commonly used conventions, in addition to standard mathematical notation and mathematical symbols. Random variables are usually written in upper case roman letters: X, Y, etc. Particular realizations of a random variable are written in corresponding lower case letters. P(A ∩ B) indicates the probability that events A and B both occur. P(A ∪ B) indicates the probability of either event A or event B occurring. σ-algebras are usually written with upper case calligraphic letters. Probability density functions and probability mass functions are denoted by lower case letters, e.g. f. Cumulative distribution functions are denoted by upper case letters, e.g. F. Greek letters are commonly used to denote unknown parameters. A tilde denotes "has the probability distribution of". Placing a hat, or caret, over a true parameter denotes an estimator of it, e.g. θ̂ is an estimator for θ. Column vectors are usually denoted by boldface lower case letters, e.g. x. The transpose operator is denoted by either a superscript T or a prime symbol. A row vector is written as the transpose of a column vector, e.g. x′. Common abbreviations include: a.e. (almost everywhere), a.s. (almost surely), cdf (cumulative distribution function), cmf (cumulative mass function), df (degrees of freedom), i.i.d. (independent and identically distributed).
Notation in probability and statistics
–
Statistics
4.
Belief
–
Another way of defining belief sees it as a mental attitude toward the likelihood of something being true. In the context of Ancient Greek thought, two related concepts were identified with regard to the concept of belief: pistis and doxa. Simplified, we may say that pistis refers to "trust" and "confidence", while doxa refers to "opinion" and "acceptance". The English word "orthodoxy" derives from doxa. Jonathan Leicester suggests that belief has the purpose of guiding action rather than indicating truth. In epistemology, philosophers use the term "belief" to refer to personal attitudes associated with true or false concepts. However, "belief" does not require active introspection. For example, we never ponder whether or not the sun will rise; we simply assume the sun will rise. Epistemology is concerned generally with a theoretical philosophical study of knowledge. The primary problem in epistemology is to understand exactly what is needed in order for us to have knowledge. Among American epistemologists, Gettier and Goldman have questioned the "justified true belief" definition. Much of the work examining the viability of the belief concept stems from philosophical analysis. The concept of belief presumes a subject (the believer) and an object of belief (the proposition). Beliefs are sometimes divided into dispositional and occurrent beliefs.
Belief
–
We are influenced by many factors that ripple through our minds as our beliefs form, evolve, and may eventually change
Belief
–
A Venn / Euler diagram which grants that truth and belief may be distinguished and that their intersection is knowledge. Unsurprisingly, this is a controversial analysis.
Belief
–
This article is about the general concept. For other uses, see Belief (disambiguation).
Belief
–
Philosopher Jonathan Glover warns that belief systems are like whole boats in the water; it is extremely difficult to alter them all at once (e.g., it may be too stressful, or people may maintain their biases without realizing it).
5.
Doubt
–
Doubt characterises a status in which the mind remains suspended between two contradictory propositions and is unable to assent to either of them. Doubt on an emotional level is indecision between belief and disbelief. Doubt involves uncertainty, a lack of sureness of an alleged fact, an action, a motive, or a decision. Doubt may involve delaying or rejecting relevant action out of concern for mistakes, faults, or appropriateness. Doubt sometimes tends to call on reason, and may encourage people to apply more rigorous methods. Doubt may have particular importance as leading towards disbelief or non-acceptance. Societally, doubt creates an atmosphere of distrust, being accusatory in nature and de facto alleging either foolishness or deceit on the part of another. Such a stance has been fostered in Western European society in opposition to tradition and authority. Psychoanalytic theory attributes doubt to childhood, when the ego develops. Childhood experiences, these theories maintain, can plant doubt about one's abilities and even about one's very identity. Mental as well as more spiritual approaches abound in response to the wide variety of potential causes for doubt. Behavioral therapy – in which a person systematically asks his own mind if the doubt has any real basis – uses Socratic methods. This method contrasts with those of the Buddhist faith, which involve a more esoteric approach to doubt and inaction.
Doubt
–
The Incredulity of Saint Thomas by Caravaggio.
Doubt
–
Doubt
Doubt
–
Doubts, by Henrietta Rae, 1886
6.
Determinism
–
Determinism is the philosophical position that for every event there exist conditions that could cause no other event. "There are many determinisms, depending on what pre-conditions are considered to be determinative of an action." Deterministic theories throughout the history of philosophy have sprung from diverse and sometimes overlapping motives and considerations. Some forms of determinism can be empirically tested with ideas from physics and the philosophy of physics. The opposite of determinism is some kind of indeterminism. Determinism is often contrasted with free will. Determinism is often taken to mean causal determinism, which in physics is known as cause-and-effect. This meaning can be distinguished from other varieties of determinism mentioned below. Historical debates involve many philosophical positions and varieties of determinism. They include debates concerning determinism and free will, technically denoted as compatibilistic (allowing the two to coexist) and incompatibilistic (denying that they can coexist). Determinism should not be confused with the self-determination of human actions by reasons, motives, and desires. Determinism rarely requires that perfect prediction be practically possible. Below are some of the more common viewpoints confused with "determinism". Causal determinism is "the idea that every event is necessitated by antecedent events and conditions together with the laws of nature". Causal determinism proposes that there is an unbroken chain of prior occurrences stretching back to the origin of the universe.
Determinism
–
Many philosophical theories of determinism frame themselves with the idea that reality follows a sort of predetermined path
Determinism
–
Adequate determinism focuses on the fact that, even without a full understanding of microscopic physics, we can predict the distribution of 1000 coin tosses
Determinism
–
Nature and nurture interact in humans. A scientist looking at a sculpture after some time does not ask whether we are seeing the effects of the starting materials or of environmental influences.
Determinism
–
A technological determinist might suggest that technology like the mobile phone is the greatest factor shaping human civilization.
7.
Fatalism
–
Fatalism is a philosophical doctrine stressing the subjugation of all events or actions to fate. Fatalism generally refers to any of the following ideas: The view that we are powerless to do anything other than what we actually do. Included in this is the belief that man has no power to influence the future, or indeed, his own actions. This belief is very similar to predeterminism. An attitude of resignation in the face of some future event or events which are thought to be inevitable. Friedrich Nietzsche named this idea "Turkish fatalism" in his book The Wanderer and His Shadow. The view that acceptance is appropriate, rather than resistance against inevitability. This belief is very similar to defeatism. Ājīvika was an ascetic movement of the Mahajanapada period in the Indian subcontinent. Ancient sources make the Ājīvikas out to be strict fatalists who did not believe in karma. "If all future occurrences are rigidly determined... coming events may in some sense be said to exist already. The future exists in the same sense as the past. Time is thus ultimately illusory." "Every phase of a process is always present. ... in a soul which has attained salvation its earthly births are still present."
8.
Hypothesis
–
A hypothesis is a proposed explanation for a phenomenon. For a hypothesis to be a scientific hypothesis, the scientific method requires that one can test it. Scientists generally base scientific hypotheses on previous observations that cannot satisfactorily be explained with the available scientific theories. Even though the words "hypothesis" and "theory" are often used synonymously, a scientific hypothesis is not the same as a scientific theory. A working hypothesis is a provisionally accepted hypothesis proposed for further research. In a formalized hypothesis of the form "if P, then Q", P is the assumption in a What If question. "Remember, the way that you prove an implication is by assuming the hypothesis." --Philip Wadler. In its ancient usage, hypothesis referred to a summary of the plot of a classical drama. The English word hypothesis comes from the Greek word hupothesis, meaning "to put under" or "to suppose". In Plato's Meno, Socrates dissects virtue with a method used by mathematicians, that of "investigating from a hypothesis". In this sense, "hypothesis" refers to a mathematical approach that simplifies cumbersome calculations. In common usage in the 21st century, a hypothesis refers to a provisional idea whose merit requires evaluation. For proper evaluation, the framer of a hypothesis needs to define specifics in operational terms. A hypothesis requires more work by the researcher in order to either confirm or disprove it. In due course, a confirmed hypothesis may become part of a theory or occasionally may grow to become a theory itself.
Hypothesis
–
The hypothesis of Andreas Cellarius, demonstrating the planetary motions in eccentric and epicyclical orbits
9.
Nihilism
–
Nihilism is a philosophical doctrine that suggests the lack of belief in one or more reputedly meaningful aspects of life. Most commonly, nihilism is presented in the form of existential nihilism, which argues that life is without objective meaning, purpose, or intrinsic value. Moral nihilists assert that any established moral values are abstractly contrived. Nihilism can also take epistemological or metaphysical forms, meaning respectively that, in some aspect, knowledge is not possible, or that reality does not actually exist. Movements such as deconstruction, among others, have been identified by commentators as "nihilistic". Nihilism thus can describe multiple arguably independent philosophical positions. Metaphysical nihilism is commonly defined as the belief that nothing exists independently of the mind. The American Heritage Medical Dictionary defines one form of nihilism as "an extreme form of skepticism that denies all existence." A similar skepticism concerning the concrete world can be found in solipsism. Both these positions are considered forms of anti-realism. Epistemological nihilism is a form of skepticism in which all knowledge is accepted as being unable to be confirmed true. A related position, mereological nihilism, ties the existence of composite objects to the resolution at which the world is viewed; since any interpretation of existence must be based on such a resolution, there is no arguable way to measure the validity of mereological nihilism. Existential nihilism is the belief that life has no intrinsic meaning or value. The meaninglessness of life is largely explored in the philosophical school of existentialism.
Nihilism
–
Søren Aabye Kierkegaard
Nihilism
–
Friedrich Wilhelm Nietzsche
10.
Scientific theory
–
Scientific theories are the most reliable, rigorous, and comprehensive form of scientific knowledge. The strength of a scientific theory is related to the diversity of phenomena it can explain and to its simplicity. In cases where a theory is shown to be inaccurate, it can still be treated as useful if it is a good approximation under specific conditions. Scientific theories are testable and make falsifiable predictions. Scientists use theories as a foundation to gain further scientific knowledge, as well as to accomplish goals such as inventing technology or curing disease. Science historian Stephen Jay Gould said, "... facts and theories are different things, not rungs in a hierarchy of increasing certainty. Facts are the world's data. Theories are structures of ideas that explain and interpret facts." The defining characteristic of all scientific knowledge, including theories, is the ability to make testable predictions. The relevance and specificity of those predictions determine how potentially useful the theory is. A would-be theory that makes no observable predictions is not a scientific theory at all, and predictions not sufficiently specific to be tested are similarly not useful. In both cases, the term "theory" is not applicable. A scientific theory is well-supported by many independent strands of evidence, rather than a single foundation. This ensures that it is probably a good approximation, if not completely correct.
Scientific theory
–
A central prediction from a current theory: the general theory of relativity predicts the bending of light in a gravitational field. This prediction was first tested during the solar eclipse of May 1919.
Scientific theory
–
The first observation of cells, by Robert Hooke, using an early microscope. This led to the development of cell theory.
Scientific theory
–
Precession of the perihelion of Mercury (exaggerated). The deviation in Mercury's position from the Newtonian prediction is about 43 arc-seconds (about two-thirds of 1/60 of a degree) per century.
Scientific theory
–
Planets of the Solar System, with the Sun at the center. (Sizes to scale; distances and illumination not to scale.)
11.
Solipsism
–
Solipsism is the philosophical idea that only one's own mind is sure to exist. As a metaphysical position, solipsism goes further to the conclusion that the world and other minds do not exist. There are varying degrees of solipsism that parallel the varying degrees of serious skepticism: Metaphysical solipsism is a variety of idealism which maintains that the individual self of the solipsistic philosopher is the whole of reality. There are weaker versions of metaphysical solipsism, such as Caspar Hare's egocentric presentism, in which other people are conscious but their experiences are simply not present. Epistemological solipsism is the variety of idealism according to which only the directly accessible mental contents of the solipsistic philosopher can be known. The existence of an external world is regarded as an unresolvable question rather than actually false. For example, one may observe a photograph of the moon; logically, this does not assure that the moon itself existed at the time the photograph is supposed to have been taken. To establish that it is an image of an independent moon requires many other assumptions that amount to begging the question. Methodological solipsism is an agnostic variant of solipsism. It exists in opposition to the strict epistemological requirements for "knowledge". It still entertains the points that any induction is fallible and that we may be brains in vats. Only the existence of thoughts is known for certain. Importantly, methodological solipsists do not intend to conclude that the stronger forms of solipsism are actually true. They simply emphasize that justifications of an external world must be founded on indisputable facts about their own consciousness. The methodological solipsist believes that subjective impressions or innate knowledge are the sole possible or proper starting point for philosophical construction.
Solipsism
–
René Descartes. Portrait by Frans Hals, 1648.
12.
Truth
–
Truth is most often used to mean being in accord with fact or reality, or fidelity to an original or standard. Truth may also be used in modern contexts to refer to an idea of "truth to self", or authenticity. The commonly understood opposite of truth is falsehood, which, correspondingly, can also take on a logical, factual, or ethical meaning. The concept of truth is debated in several contexts, including philosophy, art, and religion. Commonly, truth is thought to correspond to an independent reality, in what is sometimes called the correspondence theory of truth. Other philosophers take this common meaning to be secondary and derivative. Various views of truth continue to be debated among scholars, philosophers, and theologians. The English word truth is derived from Old English tríewþ, tréowþ, trýwþ, Middle English trewþe, cognate to Old High German triuwida, Old Norse tryggð. Like troth, it is a -th nominalisation of the adjective true (Old Norse trú, "word of honour; religious faith, belief"). Thus, "truth" involves that of "agreement with fact or reality", in Anglo-Saxon expressed by sōþ. All Germanic languages besides English have introduced a terminological distinction between truth as fidelity and truth as "factuality". To express "factuality", North Germanic opted for nouns derived from sanna "to assert, affirm", while continental West Germanic opted for continuations of wâra "faith, pact". Romance languages use terms following the Latin veritas, while the Greek aletheia and South Slavic istina have separate etymological origins. Each presents perspectives that are widely shared by published scholars.
Truth
–
Time Saving Truth from Falsehood and Envy, François Lemoyne, 1737
Truth
–
Truth, holding a mirror and a serpent (1896). Olin Levi Warner, Library of Congress Thomas Jefferson Building, Washington, D.C.
Truth
–
An angel carrying the banner of "Truth", Roslin, Midlothian
Truth
–
Walter Seymour Allward 's Veritas (Truth) outside Supreme Court of Canada, Ottawa, Ontario Canada
13.
Uncertainty
–
Uncertainty is a situation which involves imperfect and/or unknown information. However, "uncertainty is an unintelligible expression without a straightforward description". It applies to predictions of future events and to the unknown. Uncertainty arises in partially observable and/or stochastic environments, as well as due to ignorance and/or indolence. Uncertainty: a state of having limited knowledge where it is impossible to exactly describe the existing state, a future outcome, or more than one possible outcome. Risk: a state of uncertainty where some possible outcomes have an undesired effect or significant loss. If probabilities are applied to the possible outcomes using weather forecasts or even just a calibrated probability assessment, the uncertainty has been quantified. Suppose it is quantified as a 90% chance of sunshine. Furthermore, if this is a business event and $100,000 would be lost if it rains, then the risk has also been quantified. These situations can be made even more realistic by quantifying the costs of light rain vs. heavy rain vs. outright cancellation, etc. Some may represent the risk in this example as the "expected opportunity loss" (EOL), or the chance of the loss multiplied by the amount of the loss. That is useful only if the organizer of the event is "risk neutral", which most people are not. Most would be willing to pay a premium to avoid the loss. An insurance company, for example, would compute an EOL as a minimum for any coverage, then add onto that other operating costs and profit. Since many people are willing to buy insurance for many reasons, clearly the EOL alone is not the perceived value of avoiding the risk.
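The expected opportunity loss described above is a one-line calculation. A sketch using the article's own figures (a 90% chance of sunshine and a $100,000 loss if it rains):

```python
# Expected opportunity loss (EOL): chance of the loss multiplied
# by the amount of the loss.
p_rain = 0.10           # 90% chance of sunshine -> 10% chance of rain
loss_if_rain = 100_000  # dollars lost if the event is rained out

eol = p_rain * loss_if_rain
print(eol)  # about $10,000
```

A risk-neutral organizer would pay at most the EOL to avoid the risk; a risk-averse one would pay a premium above it, which is why insurance priced above the EOL still finds buyers.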
Uncertainty
–
We are frequently presented with situations wherein a decision must be made when we are uncertain of exactly how to proceed.
14.
Agnosticism
–
Agnosticism is the view that certain metaphysical claims – such as the existence of God or the supernatural – are unknown and perhaps unknowable. Agnosticism is a doctrine or set of tenets rather than a religion as such. Thomas Henry Huxley, an English biologist, coined the word "agnostic" in 1869. The Nasadiya Sukta in the Rigveda is agnostic about the origin of the universe. In Huxley's words: "Agnosticism is of the essence of science, whether ancient or modern. It simply means that a man shall not say he knows or believes that which he has no scientific grounds for professing to know or believe." Consequently, agnosticism puts aside not only the greater part of popular theology, but also the greater part of anti-theology. Agnosticism, in fact, is not a creed, but a method, the essence of which lies in the rigorous application of a single principle... Positively the principle may be expressed: In matters of the intellect, follow your reason as far as it will take you, without regard to any other consideration. And negatively: In matters of the intellect, do not pretend that conclusions are certain which are not demonstrated or demonstrable. Being a scientist, above all else, Huxley presented agnosticism as a form of demarcation. A hypothesis with no supporting objective, testable evidence is not an objective, scientific claim. As such, there would be no way to test said hypotheses, leaving the results inconclusive. His agnosticism was not compatible with forming a belief as to the truth, or falsehood, of the claim at hand. Karl Popper would also describe himself as an agnostic. Others have redefined the concept, making it compatible with forming a belief, only incompatible with absolute certainty.
Agnosticism
–
Thomas Henry Huxley
Agnosticism
–
Robert G. Ingersoll
Agnosticism
–
Bertrand Russell
15.
Epistemology
–
Epistemology is the branch of philosophy concerned with the theory of knowledge. Epistemology studies the nature of knowledge, justification, and the rationality of belief. The term "epistemology" was first used by Scottish philosopher James Frederick Ferrier in 1854. However, according to Brett Warren, King James VI of Scotland had previously personified this philosophical concept as the character Epistemon in 1591. This philosophical approach signified a Philomath seeking to obtain greater knowledge through epistemology with the use of theology. The dialogue was used by King James to educate society on various concepts, including the history and etymology of the subjects debated. The word epistemology is derived from the ancient Greek epistēmē, meaning "knowledge", and the suffix -logy, meaning "logical discourse". French philosophers then gave the term épistémologie a narrower meaning as "theory of knowledge": e.g., Émile Meyerson opened his Identity and Reality, written in 1908, with the remark that the word "is becoming current" as equivalent to "the philosophy of the sciences." Some philosophers think there is an important distinction between "knowing that", "knowing how", and "acquaintance-knowledge", with epistemology being primarily concerned with the first of these. While these distinctions are not explicit in English, they are defined explicitly in other languages. In French, Portuguese, and Spanish, to know (a person) is translated using connaître, conhecer, and conocer, respectively, whereas to know (a fact, or how to do something) is translated using savoir or saber. Modern Greek has the verbs γνωρίζω and ξέρω. Italian has the verbs conoscere and sapere, and its nouns for knowledge are conoscenza and sapienza. German has the verbs wissen and kennen.
Epistemology
–
Plato – Kant – Nietzsche
16.
Measure (mathematics)
–
In mathematical analysis, a measure on a set is a systematic way of assigning a number to each suitable subset of that set, intuitively interpreted as its size. In this sense, a measure is a generalization of the concepts of length, area, and volume. For instance, the Lebesgue measure of the interval [0, 1] in the real numbers is its length in the everyday sense of the word – specifically, 1. A measure is a function that assigns a non-negative real number or +∞ to subsets of a set X. In general, however, a consistent size cannot be assigned to every subset of X. This problem was resolved by defining measure only on a sub-collection of all subsets: the measurable subsets, which are required to form a σ-algebra. This means that countable unions, countable intersections, and complements of measurable subsets are measurable. Non-measurable sets are necessarily complicated; indeed, their existence is a non-trivial consequence of the axiom of choice. The main applications of measures are in the foundations of the Lebesgue integral, in probability theory, and in ergodic theory. Ergodic theory considers measures that are invariant under, or arise naturally from, a dynamical system. Let X be a set and Σ a σ-algebra over X. A function μ from Σ to the extended real number line is a measure if it satisfies non-negativity, the null empty set condition μ(∅) = 0, and countable additivity: for every countable collection of pairwise disjoint sets E₁, E₂, … in Σ, μ(⋃ᵢ Eᵢ) = Σᵢ μ(Eᵢ). The pair (X, Σ) is called a measurable space, and the members of Σ are called measurable sets. A triple (X, Σ, μ) is called a measure space. A probability measure is a measure with total measure one – i.e. μ(X) = 1. A probability space is a measure space with a probability measure.
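The axioms above can be checked mechanically on a small finite example. A sketch (the three-element set and the choice of counting measure are illustrative, not from the article): on a finite set the power set is a σ-algebra, and the counting measure μ(A) = |A| satisfies the null-empty-set and additivity conditions.

```python
from itertools import combinations

# Counting measure on X = {0, 1, 2}, with the full power set as Σ.
X = {0, 1, 2}

def subsets(s):
    """All subsets of s; the power set serves as the σ-algebra."""
    items = list(s)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def mu(A):
    """Counting measure: the size of the set."""
    return len(A)

sigma = subsets(X)

# Null empty set: μ(∅) = 0.
assert mu(frozenset()) == 0

# Additivity on disjoint sets: μ(A ∪ B) = μ(A) + μ(B).
for A in sigma:
    for B in sigma:
        if A.isdisjoint(B):
            assert mu(A | B) == mu(A) + mu(B)

print("counting measure satisfies the axioms on", sorted(X))
```

Dividing μ by μ(X) = 3 would give total measure one, turning this into a probability measure on (X, Σ).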
Measure (mathematics)
–
Informally, a measure has the property of being monotone in the sense that if A is a subset of B, the measure of A is less than or equal to the measure of B. Furthermore, the measure of the empty set is required to be 0.
17.
Event (probability theory)
–
In probability theory, an event is a set of outcomes of an experiment to which a probability is assigned. An event defines a complementary event, and together these define a Bernoulli trial: did the event occur or not? Typically, when the sample space is finite, any subset of the sample space is an event. However, this approach does not work well in cases where the sample space is uncountably infinite, most notably when the outcome is a real number. So, when defining a probability space it is possible, and often necessary, to exclude certain subsets of the sample space from being events. In the finite case, however, an event can be any subset, including any singleton set, the empty set, and the sample space itself. Other events are proper subsets of the sample space that contain multiple elements. Since all events are sets, they are usually represented graphically using Venn diagrams. When a distribution is defined on the real line, such as the normal distribution, the sample space is the set of real numbers or some subset of the real numbers. Attempts to define probabilities for all subsets of the real numbers run into difficulties when one considers "badly behaved" sets, such as those that are nonmeasurable. Hence, it is necessary to restrict attention to a more limited family of subsets. The most natural choice is the Borel measurable sets, derived from unions and intersections of intervals. However, the larger class of Lebesgue measurable sets proves more useful in practice. In the measure-theoretic description of probability spaces, an event may be defined as an element of a selected σ-algebra of subsets of the sample space. With a reasonable specification of the probability space, however, all events of interest are elements of the σ-algebra.
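For a finite sample space the definitions above are directly computable. A sketch (the fair-die example is illustrative, not from the article): events are subsets, each event has a complement, and the pair forms a Bernoulli trial whose probabilities sum to one.

```python
from fractions import Fraction

# Sample space for one roll of a fair die.
sample_space = {1, 2, 3, 4, 5, 6}

def prob(event):
    """Probability of an event: favourable outcomes over total."""
    assert event <= sample_space  # events are subsets of the sample space
    return Fraction(len(event), len(sample_space))

even = {2, 4, 6}                   # the event "roll is even"
complement = sample_space - even   # its complementary event

print(prob(even))                     # 1/2
print(prob(even) + prob(complement))  # 1 -- a Bernoulli trial
```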
Event (probability theory)
–
A Venn diagram of an event. B is the sample space and A is an event. By the ratio of their areas, the probability of A is approximately 0.4.
18.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among philosophers as to the exact scope and definition of mathematics. Mathematicians seek out patterns and use them to formulate new conjectures; they resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, then mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, measurement, and the systematic study of the shapes and motions of physical objects. Practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said, "The universe cannot be read until we have learned the language and become familiar with the characters in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules."
Mathematics
–
Euclid (holding calipers), Greek mathematician, 3rd century BC, as imagined by Raphael in this detail from The School of Athens.
Mathematics
–
Greek mathematician Pythagoras (c. 570 – c. 495 BC), commonly credited with discovering the Pythagorean theorem
Mathematics
–
Leonardo Fibonacci, the Italian mathematician who introduced the Hindu–Arabic numeral system to the Western World
Mathematics
–
Carl Friedrich Gauss, known as the prince of mathematicians
19.
Statistics
–
Statistics is the study of the collection, analysis, interpretation, presentation, and organization of data. Populations can be diverse topics such as "all people living in a country" or "every atom composing a crystal". Statistics deals with all aspects of data, including the planning of data collection in terms of the design of surveys and experiments. Statistician Sir Arthur Lyon Bowley defines statistics as "Numerical statements of facts in any department of inquiry placed in relation to each other". When census data cannot be collected, statisticians collect data by developing specific experiment designs and survey samples. Representative sampling assures that conclusions can safely extend from the sample to the population as a whole. In contrast, an observational study does not involve experimental manipulation. Inferences in mathematical statistics are made under the framework of probability theory, which deals with the analysis of random phenomena. Working from a null hypothesis, two basic forms of error are recognized: Type I errors (a true null hypothesis is wrongly rejected, giving a "false positive") and Type II errors (a false null hypothesis fails to be rejected, giving a "false negative"). Multiple problems have come to be associated with this framework, ranging from obtaining a sufficient sample size to specifying an adequate null hypothesis. Measurement processes that generate statistical data are also subject to error. Other types of errors can also be important. Specific techniques have been developed to address these problems. Statistics continues to be an area of active research, for example on the problem of how to analyze Big data. Statistics is a mathematical body of science that pertains to the collection, analysis, interpretation, and presentation of data; some consider it a distinct mathematical science rather than a branch of mathematics.
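The Type I error rate mentioned above can be illustrated with a short simulation. This is a sketch (the normal samples, sample size, and test rule are illustrative choices, not from the article): when the null hypothesis is actually true, a two-sided test at the 5% level should wrongly reject it about 5% of the time.

```python
import random
import statistics

random.seed(1)

# A Type I error rejects a null hypothesis that is actually true.
# Here the true mean really is 0, so every rejection is a false positive.
def rejects_null(sample, mu0=0.0, z_crit=1.96):
    n = len(sample)
    se = statistics.stdev(sample) / n ** 0.5     # standard error of the mean
    z = (statistics.mean(sample) - mu0) / se
    return abs(z) > z_crit                       # two-sided test at ~5%

false_positives = sum(
    rejects_null([random.gauss(0, 1) for _ in range(100)])
    for _ in range(1_000)
)
print(false_positives / 1_000)  # close to the 0.05 significance level
```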
Statistics
–
Scatter plots are used in descriptive statistics to show the observed relationships between different variables.
Statistics
–
More probability density is found as one gets closer to the expected (mean) value in a normal distribution. Statistics used in standardized testing assessment are shown. The scales include standard deviations, cumulative percentages, percentile equivalents, Z-scores, T-scores, standard nines, and percentages in standard nines.
Statistics
–
Gerolamo Cardano, the earliest pioneer of the mathematics of probability.
Statistics
–
Karl Pearson, a founder of mathematical statistics.
20.
Finance
–
Finance is a field that deals with the study of investments. It includes the dynamics of assets and liabilities over time under conditions of different degrees of uncertainty and risk. Finance can also be defined as the science of money management. Finance aims to price assets based on their risk level and their expected rate of return. Finance can be broken into three different sub-categories: public finance, corporate finance and personal finance. Personal finance may also involve paying off debt obligations. Household cash flow totals up all the expected sources of income within a year, minus all expected expenses within the same year. From this analysis, the financial planner can determine in what time the personal goals can be accomplished. Adequate protection: the analysis of how to protect a household from unforeseen risks. These risks can be divided into the following: liability, property, death, disability, health and long-term care. Some of these risks may be self-insurable, while most will require the purchase of an insurance contract. Determining how much insurance to get, at the most cost-effective terms, requires knowledge of the market for personal insurance. Business owners, professionals and entertainers require specialized insurance professionals to adequately protect themselves. Since insurance also enjoys some tax benefits, utilizing insurance investment products may be a critical piece of overall investment planning. Tax planning: typically income tax is the single largest expense in a household.
Finance
–
London Stock Exchange, global center of finance.
Finance
–
Wall Street, the center of American finance.
21.
Gambling
–
Gambling thus requires three elements to be present: consideration, chance and prize. The term gaming in this context typically refers to instances in which the activity has been specifically permitted by law. However, this distinction is not universally observed in the English-speaking world. In the United Kingdom, the regulator of gambling activities is called the Gambling Commission. Gambling is also a major commercial activity, with the legal gambling market totaling an estimated $335 billion in 2009. In other forms, gambling can be conducted with materials which have a value but are not real money. Popular games played in modern casinos originate from Europe and China. Games such as baccarat, roulette and blackjack originate from different areas of Europe. Keno, an ancient Chinese lottery game, is played in casinos around the world. In addition, pai gow poker, a hybrid between pai gow and poker, is also played. Many jurisdictions, local as well as national, either ban gambling or heavily control it by licensing the vendors. Such regulation generally leads to illegal gambling in the areas where it is not allowed. There is generally legislation requiring that the odds in gaming devices are statistically random, to prevent manufacturers from making some high-payoff results impossible. Since these high payoffs have very low probability, such a bias can quite easily be missed unless the odds are checked carefully. Most jurisdictions that allow gambling require participants to be above a certain age.
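A minimal calculation, with hypothetical odds, illustrates why a bias in a low-probability payoff is so easy to miss: even a long test run of a fair device often shows no jackpot at all, so a device rigged to never pay looks identical.

```python
def p_zero_jackpots(p_jackpot, n_plays):
    """Probability of observing no jackpot at all in n independent
    plays of a device whose jackpot probability is p_jackpot."""
    return (1 - p_jackpot) ** n_plays

# A jackpot advertised at 1 in 10,000: even across 10,000 test plays,
# a perfectly fair machine shows zero jackpots about 37% of the time,
# so absence of the jackpot proves nothing on its own.
fair_looks_rigged = p_zero_jackpots(1 / 10_000, 10_000)
```

This is why regulators check the odds directly (by inspecting the device's random number generation) rather than relying on observed payout counts alone.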
Gambling
–
Caravaggio, The Cardsharps, c. 1594
Gambling
–
Gamblers in the Ship of Fools, 1494
Gambling
–
Bag with 65 Inlaid Gambling Sticks, Tsimshian (Native American), 19th century, Brooklyn Museum
Gambling
–
The Caesars Palace main fountain. The statue is a copy of the ancient Winged Victory of Samothrace.
22.
Science
–
Science is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe. The formal sciences are often excluded as they do not depend on empirical observations. Disciplines which use science, like medicine, may also be considered to be applied sciences. During the Islamic Golden Age, foundations for the scientific method were laid by Ibn al-Haytham in his Book of Optics. In the 17th and 18th centuries, scientists increasingly sought to formulate knowledge in terms of physical laws. It was during this time that scientific disciplines such as biology and physics reached their modern shapes. Science in a broad sense existed in many historical civilizations. Modern science is so successful in its results that it now defines what science is in the strictest sense of the term. Science in its original sense was a word for a type of knowledge rather than a specialized word for the pursuit of such knowledge. In particular, it was the type of knowledge which people can communicate to each other and share. For example, knowledge about the working of natural things was gathered long before recorded history and led to the development of complex abstract thought. This is shown by the construction of techniques for making poisonous plants edible, and of buildings such as the Pyramids. The early Greek philosophers were mainly theorists, particularly interested in astronomy. In contrast, trying to use knowledge of nature to imitate nature was seen as a more appropriate interest for lower class artisans. A clear-cut distinction between formal and empirical science was made by the pre-Socratic philosopher Parmenides.
Science
–
Maize, known in some English-speaking countries as corn, is a large grain plant domesticated by indigenous peoples in Mesoamerica in prehistoric times.
Science
–
The scale of the universe mapped to the branches of science and the hierarchy of science.
Science
–
Aristotle (384–322 BC), one of the early figures in the development of the scientific method.
Science
–
Galen (129 – c. 216) noted that the optic chiasm is X-shaped. (Engraving from Vesalius, 1543)
23.
Physics
–
One of the main goals of physics is to understand how the universe behaves. Physics is one of the oldest academic disciplines, perhaps the oldest through its inclusion of astronomy. The boundaries of physics are not rigidly defined. New ideas in physics often explain the fundamental mechanisms of other sciences while opening new avenues of research in areas such as philosophy. Physics also makes significant contributions through advances in new technologies that arise from theoretical breakthroughs. The United Nations named 2005 the World Year of Physics. Astronomy is the oldest of the natural sciences. The planets were often a target of worship, believed to represent the gods. While the explanations for these phenomena were often unscientific and lacking in evidence, these early observations laid the foundation for later astronomy. In his Book of Optics, Ibn al-Haytham was also the first to delve into the way the eye itself works. Fellow polymaths, from Robert Grosseteste and Leonardo da Vinci to René Descartes, Johannes Kepler and Isaac Newton, were in his debt. Indeed, the influence of Ibn al-Haytham's Optics ranks alongside that of Newton's work of the same title, published 700 years later. The translation of The Book of Optics had a huge impact on Europe. From it, later European scholars were able to build the same devices as Ibn al-Haytham had, and understand the way light works. From this, important inventions such as eyeglasses, magnifying glasses, telescopes and cameras were developed.
Physics
–
Ancient Egyptian astronomy is evident in monuments like the ceiling of Senemut's tomb from the Eighteenth Dynasty of Egypt.
Physics
–
Sir Isaac Newton (1643–1727), whose laws of motion and universal gravitation were major milestones in classical physics
Physics
–
Albert Einstein (1879–1955), whose work on the photoelectric effect and the theory of relativity led to a revolution in 20th century physics
24.
Artificial intelligence
–
Artificial intelligence is intelligence exhibited by machines. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving". As machines become increasingly capable, mental facilities once thought to require intelligence are removed from the definition. For example, optical character recognition is no longer perceived as an exemplar of "artificial intelligence", having become a routine technology. Capabilities currently classified as AI include successfully understanding human speech, competing at a high level in strategic game systems, self-driving cars, and interpreting complex data. Some people also consider AI a danger to humanity if it progresses unabated. The central problems of AI research include reasoning, knowledge, planning, natural language processing, perception, and the ability to move and manipulate objects. General intelligence is among the field's long-term goals. Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used, including methods based on probability and economics. The AI field draws upon computer science, mathematics, psychology and many other fields. The field was founded on the claim that human intelligence "can be so precisely described that a machine can be made to simulate it." In the twenty-first century AI techniques became an essential part of the technology industry, helping to solve many challenging problems in computer science. With his Calculus ratiocinator, Gottfried Leibniz extended the concept of the calculating machine, intending to perform operations on concepts rather than numbers. Since the 19th century, artificial beings have been common in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R.
Artificial intelligence
–
Kismet, a robot with rudimentary social skills
Artificial intelligence
–
An ontology represents knowledge as a set of concepts within a domain and the relationships between those concepts.
25.
Machine learning
–
Machine learning is the subfield of computer science that "gives computers the ability to learn without being explicitly programmed". Machine learning is closely related to computational statistics, which also focuses on prediction-making through the use of computers. It has strong ties to mathematical optimization, which delivers methods, theory and application domains to the field. Machine learning is sometimes conflated with data mining, where the latter subfield focuses more on exploratory data analysis and is known as unsupervised learning. In Turing's proposal "Computing Machinery and Intelligence", he explores the various characteristics that could be possessed by a thinking machine and the implications of constructing one. Learning tasks are typically classified into three broad categories, depending on the nature of the learning "signal" or "feedback" available to a learning system. Supervised learning: the computer is presented with example inputs and their desired outputs, and the goal is to learn a general rule that maps inputs to outputs. Unsupervised learning: no labels are given to the learning algorithm, leaving it on its own to find structure in its input. Unsupervised learning can be a goal in itself or a means towards an end. Reinforcement learning: a program interacts with a dynamic environment in which it must perform a certain goal; an example is learning to play a game by playing against an opponent. Between supervised and unsupervised learning is semi-supervised learning, where the teacher gives an incomplete training signal: a training set with some of the target outputs missing. Among other categories of machine learning problems, learning to learn learns its own inductive bias based on previous experience. In classification, inputs are divided into two or more classes, and the learner must assign unseen inputs to one of these classes; this is typically tackled in a supervised way. Spam filtering is an example of classification, where the classes are "spam" and "not spam". In regression, also a supervised problem, the outputs are continuous rather than discrete. In clustering, a set of inputs is to be divided into groups.
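As an illustrative sketch of supervised classification (the data, labels and function names below are invented for the example), even a one-nearest-neighbour rule is a complete learner: it maps an unseen input to the label of the closest labelled example.

```python
def nearest_neighbour(train, query):
    """Classify `query` with the label of the closest training point.
    `train` is a list of ((x, y), label) pairs -- a toy supervised
    setup: example inputs paired with desired outputs."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    _, label = min(train, key=lambda pair: dist2(pair[0], query))
    return label

# Hypothetical labelled points forming two clusters.
train = [((0, 0), "not spam"), ((1, 0), "not spam"),
         ((5, 5), "spam"), ((6, 5), "spam")]
print(nearest_neighbour(train, (5, 4)))  # prints "spam"
```

An unsupervised method would instead receive only the coordinate pairs, without labels, and have to discover the two clusters on its own.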
26.
Computer science
–
Computer science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems. Its fields can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory, are highly abstract, while fields such as computer graphics emphasize visual applications. Other fields still focus on challenges in implementing computation. Human–computer interaction considers the challenges in making computers and computations useful, usable, and universally accessible to humans. The earliest foundations of what would become computer science predate the invention of the digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division. Further, algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner. He may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. Charles Babbage started developing his Analytical Engine in 1834 and "in less than two years he had sketched out many of the salient features of the modern computer". "A crucial step was the adoption of a punched card system derived from the Jacquard loom", making it infinitely programmable.
Computer science
–
Ada Lovelace is credited with writing the first algorithm intended for processing on a computer.
Computer science
–
The German military used the Enigma machine (shown here) during World War II for communications they wanted kept secret. The large-scale decryption of Enigma traffic at Bletchley Park was an important factor that contributed to Allied victory in WWII.
Computer science
–
Digital logic
27.
Game theory
–
Game theory is "the study of mathematical models of conflict and cooperation between intelligent rational decision-makers." Game theory is mainly used in economics, political science and psychology, as well as in logic, computer science and biology. Originally, it addressed zero-sum games, in which one person's gains result in losses for the other participants. Modern game theory began with the idea of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in mathematical economics. His paper was followed by the 1944 book Theory of Games and Economic Behavior, co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty. This theory was developed extensively in the 1950s by many scholars. Game theory was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been widely recognized as an important tool in many fields. With the Nobel Memorial Prize in Economic Sciences going to game theorist Jean Tirole in 2014, eleven game theorists have now won the economics Nobel Prize. John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology. Early discussions of examples of two-person games occurred long before the rise of modern mathematical game theory. James Madison made what we now recognize as a game-theoretic analysis of the ways states can be expected to behave under different systems of taxation. In 1913 Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels ("On an Application of Set Theory to the Theory of the Game of Chess").
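The idea of a mixed-strategy equilibrium can be sketched for a 2×2 zero-sum game. This is a standard textbook indifference computation, not taken from the article; the payoff matrix used is the classic matching-pennies game.

```python
def row_mixed_strategy(a, b, c, d):
    """For a 2x2 zero-sum game with row-player payoffs
        [[a, b],
         [c, d]],
    return the probability p of playing the first row that makes the
    column player indifferent between columns -- the condition that
    characterises the mixed-strategy equilibrium here. Assumes the
    game has no saddle point in pure strategies."""
    return (d - c) / (a - b - c + d)

# Matching pennies, payoffs [[1, -1], [-1, 1]]: the equilibrium is
# to play each side with probability 1/2.
p = row_mixed_strategy(1, -1, -1, 1)
```

The indifference condition p*a + (1-p)*c = p*b + (1-p)*d is what the function solves; at that p, no pure column response can exploit the row player.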
Game theory
–
An extensive form game
28.
Philosophy
–
Philosophy is the study of general and fundamental problems concerning matters such as existence, knowledge, values, reason, mind, and language. The term was probably coined by Pythagoras. Philosophical methods include questioning, critical discussion, rational argument, and systematic presentation. Philosophical questions include: Is it possible to know anything and to prove it? What is most real? However, philosophers might also pose more practical and concrete questions such as: Is there a best way to live? Is it better to be just or unjust? Do humans have free will? Historically, "philosophy" encompassed any body of knowledge. From the time of Greek philosopher Aristotle to the 19th century, "natural philosophy" encompassed astronomy, medicine and physics. For example, Newton's 1687 Mathematical Principles of Natural Philosophy later became classified as a book of physics. In the 19th century, the growth of modern research universities led academic philosophy and other disciplines to professionalize and specialize. In the modern era, some investigations that were traditionally part of philosophy became separate academic disciplines, including sociology, linguistics and economics. Other investigations closely related to art, science, politics, or other pursuits remained part of philosophy. For example, is beauty objective or subjective?
Philosophy
–
René Descartes
Philosophy
–
Thomas Aquinas
Philosophy
–
Jeremy Bentham
Philosophy
–
Thomas Hobbes
29.
Complex systems
–
Complex systems present problems both in mathematical modelling and philosophical foundations. One of a variety of journals using this approach to complexity is Complex Systems. Such systems are used in computer science, biology, economics, physics, chemistry, architecture and many other fields. The field is also called complex systems theory, complexity science, study of complex systems, complex networks, network science, sciences of complexity, and historical physics. A variety of abstract complex systems is studied as a field of mathematics. The key problems of complex systems are difficulties with their formal modelling and simulation. In different research contexts complex systems are defined on the basis of their different attributes. Since all complex systems have many interconnected components, the science of networks and network theory are important and useful tools for the study of complex systems. A theory for the resilience of systems of systems represented by a network of interdependent networks was developed by Buldyrev et al. A consensus regarding a universal definition of complex system does not yet exist. The study of complex system models is used for many scientific questions poorly suited to the traditional mechanistic conception provided by science. Linear systems represent the main class of systems for which general techniques for stability analysis exist. However, engineering practice must now include elements of complex systems research. This debate would notably lead economists and other parties to explore the question of computational complexity. The first research institute focused on complex systems, the Santa Fe Institute, was founded in 1984.
Complex systems
–
Complex systems
Complex systems
–
A Braitenberg simulation, programmed in breve, an artificial life simulator
Complex systems
–
A complex adaptive system model
Complex systems
–
This is a schematic representation of three types of mathematical models of complex systems with the level of their mechanistic understanding.
30.
Probability interpretations
–
The word probability has been used in a variety of ways since it was first applied to the mathematical study of games of chance. Does probability measure the real, physical tendency of something to occur, or is it a measure of how strongly one believes it will occur? In answering such questions, mathematicians interpret the probability values of probability theory. There are two broad categories of probability interpretations, which can be called "physical" and "evidential" probabilities. Physical probabilities, which are also called frequency probabilities, are associated with physical systems such as roulette wheels, rolling dice and radioactive atoms. In such systems, a given type of event tends to occur at a persistent rate, or "relative frequency", in a long run of trials. Physical probabilities either explain, or are invoked to explain, these stable frequencies. The two main kinds of theory of physical probability are frequentist accounts and propensity accounts. On most accounts, evidential probabilities are considered to be degrees of belief, defined in terms of dispositions to gamble at certain odds. The four main evidential interpretations are the classical interpretation, the subjective interpretation, the epistemic or inductive interpretation, and the logical interpretation. There are also evidential interpretations of probability covering groups, which are often labelled as 'intersubjective'. Some interpretations of probability are associated with approaches to statistical inference, including theories of estimation and hypothesis testing. The physical interpretation, for example, is taken by followers of "frequentist" statistical methods, such as Ronald Fisher, Jerzy Neyman and Egon Pearson. This article, however, focuses on the interpretations of probability rather than theories of statistical inference. The terminology of this topic is rather confusing, in part because probabilities are studied within a variety of academic fields. The word "frequentist" is especially tricky.
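The "persistent rate" that physical interpretations appeal to can be sketched with a short simulation. The 37-pocket wheel, the chosen pocket and the fixed seed are all hypothetical details for the example.

```python
import random

def relative_frequency(trials, pocket=0, pockets=37, seed=42):
    """Spin a simulated `pockets`-pocket wheel `trials` times and
    return the relative frequency of the chosen pocket. A fixed
    seed makes the run reproducible."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if rng.randrange(pockets) == pocket)
    return hits / trials

# In a long run of trials the observed frequency settles near the
# physical probability 1/37, the stable rate the interpretation
# takes probability to describe.
freq = relative_frequency(200_000)
```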
Probability interpretations
–
The classical definition of probability works well for situations with only a finite number of equally-likely outcomes.
Probability interpretations
–
For frequentists, the probability of the ball landing in any pocket can be determined only by repeated trials in which the observed result converges to the underlying probability in the long run.
Probability interpretations
–
Gambling odds reflect the average bettor's 'degree of belief' in the outcome.
31.
Experiment
–
An experiment is a procedure carried out to support, refute, or validate a hypothesis. Experiments provide insight into cause and effect by demonstrating what outcome occurs when a particular factor is manipulated. Experiments always rely on a repeatable procedure and logical analysis of the results. There also exist natural experimental studies. Experiments and other types of hands-on activities are very important to student learning in the science classroom. Experiments can help a student become more engaged and interested in the material they are learning, especially when used over time. Experiments can vary from informal natural comparisons to highly controlled procedures. Uses of experiments vary considerably between the natural and human sciences. Experiments typically include controls, which are designed to minimize the effects of variables other than the independent variable. This increases the reliability of the results, often through a comparison between control measurements and the other measurements. Scientific controls are a part of the scientific method. Ideally, all variables in an experiment are controlled and none are uncontrolled. In the scientific method, an experiment is an empirical procedure that arbitrates between competing hypotheses. Researchers also use experimentation to test new hypotheses to support or disprove them. An experiment usually tests a hypothesis, which is an expectation about how a particular phenomenon works.
Experiment
–
Even very young children perform rudimentary experiments to learn about the world and how things work.
Experiment
–
Original map by John Snow showing the clusters of cholera cases in the London epidemic of 1854
32.
Random
–
Randomness is the lack of pattern or predictability in events. A random sequence of events, symbols or steps has no order and does not follow an intelligible pattern or combination. In many cases the frequency of different outcomes over a large number of events is nonetheless predictable. For example, when throwing two dice, a sum of 7 will occur twice as often as 4. In this view, randomness is a measure of uncertainty of an outcome, rather than haphazardness, and applies to concepts of chance, probability, and information entropy. The fields of mathematics, probability and statistics use formal definitions of randomness. In statistics, a random variable is an assignment of a numerical value to each possible outcome of an event space. This association facilitates the calculation of probabilities of the events. Random variables can appear in random sequences. A random process is a sequence of random variables whose outcomes do not follow a deterministic pattern, but follow an evolution described by probability distributions. These and other constructs are extremely useful in probability theory and the various applications of randomness. Randomness is most often used in statistics to signify well-defined statistical properties. Monte Carlo methods, which rely on random input, are important techniques in science, as, for instance, in computational science. By analogy, quasi-Monte Carlo methods use quasi-random number generators. With a bowl containing just 10 red marbles and 90 blue marbles, a random selection mechanism would choose a red marble with probability 1/10.
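The dice claim above can be checked by exhaustive enumeration of the 36 equally likely ordered outcomes, with no randomness needed:

```python
from itertools import product

def ways_to_roll(total):
    """Count the ordered outcomes of two fair dice summing to `total`."""
    return sum(1 for a, b in product(range(1, 7), repeat=2)
               if a + b == total)

# 6 of the 36 outcomes sum to 7 (1+6, 2+5, 3+4, 4+3, 5+2, 6+1),
# but only 3 sum to 4 (1+3, 2+2, 3+1): a 7 is exactly twice as
# likely as a 4.
ratio = ways_to_roll(7) / ways_to_roll(4)
```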
Random
–
Ancient fresco of dice players in Pompeii.
Random
–
A pseudorandomly generated bitmap.
Random
–
The ball in a roulette wheel can be used as a source of apparent randomness, because its behavior is very sensitive to the initial conditions.
33.
Frequentist probability
–
Frequentist probability defines an event's probability as the limit of its relative frequency in a large number of trials. This interpretation supports the statistical needs of experimental scientists and pollsters; probabilities can be found by a repeatable objective process. It does not support all needs, however; gamblers typically require estimates of the odds without experiments. The development of the frequentist account was motivated by the problems and paradoxes of the previously dominant viewpoint, the classical interpretation. The classical interpretation stumbled at any statistical problem that has no natural symmetry for reasoning. In the frequentist interpretation, probabilities are discussed only when dealing with well-defined random experiments. The set of all possible outcomes of a random experiment is called the sample space of the experiment. An event is defined as a particular subset of the sample space to be considered. For any given event, only one of two possibilities may hold: it occurs or it does not. The relative frequency of occurrence of an event, observed in a number of repetitions of the experiment, is a measure of the probability of that event. This is the core conception of probability in the frequentist interpretation. Clearly, as the number of trials is increased, one might expect the relative frequency to become a better approximation of a "true frequency". The frequentist interpretation is a philosophical approach to the definition and use of probabilities; it is one of several such approaches. It does not claim to capture all connotations of the concept 'probable' in colloquial speech of natural languages. It offers distinct guidance in the construction and design of practical experiments, especially when contrasted with the Bayesian interpretation. Whether this guidance is useful, or is apt to mis-interpretation, has been a source of controversy.
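The expectation that relative frequency approximates the "true frequency" better as trials increase can be sketched with a simulated fair coin. The seed and trial counts are arbitrary choices for the illustration.

```python
import random

def coin_frequency(n_flips, seed=0):
    """Relative frequency of heads in n flips of a simulated fair coin."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

# Deviations from the true frequency 0.5 tend to shrink as the
# number of trials grows (on the order of 1/sqrt(n)).
errors = [abs(coin_frequency(n) - 0.5) for n in (100, 10_000, 1_000_000)]
```

Note that the shrinkage is only statistical: any single short run can stray far from 0.5, which is exactly why the frequentist account speaks of a long run of trials.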
Frequentist probability
–
John Venn
34.
Likelihood function
–
In statistics, a likelihood function is a function of the parameters of a statistical model given data. Likelihood functions play a key role in statistical inference, especially methods of estimating a parameter from a set of statistics. In informal contexts, "likelihood" is often used as a synonym for "probability." In statistics, a distinction is made depending on the roles of outcomes vs. parameters. Probability is used before data are available to describe possible future outcomes given a fixed value for the parameter. Likelihood is used after data are available to describe a function of a parameter for a given outcome. The function is defined differently for discrete and continuous probability distributions. Let X be a random variable with a discrete probability distribution depending on a parameter θ. Then the function L(θ | x) = P_θ(X = x), considered as a function of θ, is called the likelihood function. Let X be a random variable following an absolutely continuous distribution with density function f depending on a parameter θ. Then the function L(θ | x) = f_θ(x), considered as a function of θ, is called the likelihood function. In measure-theoretic probability theory, the density function is defined as the Radon–Nikodym derivative of the probability distribution relative to a dominating measure. This provides a likelihood function for any statistical model, whether the distributions involved are discrete, absolutely continuous, a mixture or something else. For many applications, the natural logarithm of the likelihood function, called the log-likelihood, is more convenient to work with.
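The coin example in the caption below can be computed directly: after observing HH, the likelihood is L(θ) = θ², maximised at θ = 1. The helper names and the grid-search estimator here are illustrative, not part of the article.

```python
def likelihood(theta, heads, flips):
    """Likelihood of heads-probability `theta` after observing
    `heads` heads in `flips` independent tosses. The binomial
    coefficient is omitted: it does not depend on theta, so it
    does not affect where the likelihood is maximised."""
    return theta ** heads * (1 - theta) ** (flips - heads)

def mle(heads, flips, grid=10_001):
    """Maximum-likelihood estimate of theta by grid search over
    [0, 1]; for this model the MLE is the sample proportion."""
    thetas = [i / (grid - 1) for i in range(grid)]
    return max(thetas, key=lambda t: likelihood(t, heads, flips))

# After HH: L(theta) = theta^2, so the MLE is theta = 1.
# After 7 heads in 10 flips, the MLE is the sample proportion 0.7.
```

In practice one maximises the log-likelihood instead, which turns the product over observations into a sum and avoids numerical underflow.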
Likelihood function
–
The likelihood function for estimating the probability of a coin landing heads-up without prior knowledge after observing HH
35.
History of probability
–
Probability is distinguished from statistics. While statistics deals with data and inferences from it, probability deals with the stochastic processes which lie behind data or outcomes. The mathematical sense of the term probability is from 1718. In the 18th century, the term chance was also used in the mathematical sense of "probability". This word is ultimately from Latin cadentia, i.e. "case". Similarly, the derived word likelihood had a meaning of "similarity, resemblance" but took on a meaning of "probability" from the mid 15th century. Medieval law of evidence developed a grading of degrees of proof, probabilities, presumptions and half-proof to deal with the uncertainties of evidence in court. In 1657 Christiaan Huygens gave a comprehensive treatment of the subject. From Games, Gods and Gambling (ISBN 978-0-85264-171-2) by F. N. David: In ancient times there were games played using astragali, or talus bones. In Egypt, excavators of tombs found a game they called "Hounds and Jackals", which closely resembles the modern game "Snakes and Ladders". It seems that this was an early stage of the creation of dice. The first dice game mentioned in literature of the Christian era was called hazard, played with three dice, and thought to have been brought to Europe by the knights returning from the Crusades.
History of probability
37.
Witness
–
An eyewitness is one who testifies about what they perceived through their senses. A hearsay witness is one who testifies about what someone else said or wrote. In most court proceedings there are many limitations on when hearsay evidence is admissible, although some types of statements are not subject to such limitations. A reputation witness is one who testifies about the reputation of a person or business entity, when reputation is material to the dispute at issue. Sometimes the testimony is provided in a confidential setting. Although informally a witness includes whoever perceived the event, in law, a witness is different from an informant. A subpoena commands a person to appear. It is used to compel the testimony of a witness in a trial. In many jurisdictions, it is compulsory to comply and to tell the truth, under penalty of perjury. In a court proceeding, a witness may be called by either the prosecution or the defense. The side that calls the witness first asks questions, in what is called direct examination. The opposing side then may ask their own questions, in what is called cross-examination. In some cases, redirect examination may then be used by the side that called the witness, but usually only to contradict specific testimony from the cross-examination. Recalling a witness means calling a witness, who has already given testimony in a proceeding, to give further testimony.
Witness
–
Heinrich Buscher (de) as a witness during the Nuremberg Trials
38.
Legal case
–
Legal Case was an Irish-bred, British-trained Thoroughbred racehorse and sire. He won the Premio Roma in 1990. After his retirement from racing Legal Case had some success as a breeding stallion in Brazil. He was a horse with no white markings, bred in Ireland by Ovidstown Investments Ltd. Legal Case was sired by the dual Prix de l'Arc de Triomphe winner Alleged out of the mare Maryinsky. Alleged was a strong influence for stamina: his best winners included Miss Alleged, Shantou, Law Society and Midway Lady. Maryinsky won two minor races in 1980. During his career Legal Case was owned by the businessman Sir Gordon White and trained at the Bedford House stable in Newmarket by Luca Cumani. Legal Case was beaten three lengths by the Michael Stoute-trained colt Dolpour, with Opening Verse finishing fifth of the seven runners. In September he was moved up in distance to race over ten furlongs at Goodwood Racecourse. Ridden by Dettori, Legal Case started the 7/4 favourite against four opponents. The other contenders included the improving handicapper Braashee, the Royal Lodge Stakes winner High Estate and Ile de Nisky. In the final strides Legal Case raced neck-and-neck with Dolpour before the pair crossed the line together. He was declared the winner by a head from Dolpour, with Ile de Chypre a short head away in third. In 1990 Dettori took over as Cumani's stable jockey.
Legal case
39.
Europe
–
Europe is a continent that comprises the westernmost part of Eurasia. Europe is bordered by the Arctic Ocean to the north and the Mediterranean Sea to the south. Yet the non-oceanic borders of Europe, a concept dating back to classical antiquity, are arbitrary. Europe covers about 2% of the Earth's surface. As of 2012, Europe had a total population of over 700 million. Further from the Atlantic, seasonal differences are more noticeable than close to the coast. Europe, in particular ancient Greece, is the birthplace of Western civilization. The Renaissance, with its humanism, exploration, art and science, led the "old continent", and eventually the rest of the world, to the modern era. From this period onwards, Europe played a predominant role in global affairs. Between the 16th and 20th centuries, European nations controlled at various times the Americas, most of Africa, Oceania and the majority of Asia. In 1949, the Council of Europe was formed following a speech by Sir Winston Churchill, with the idea of unifying Europe to achieve common goals. It includes all European states except for Belarus, Kazakhstan and Vatican City. Further European integration by some states led to the formation of the European Union, a separate political entity that lies between a confederation and a federation. The EU has been expanding eastward since the fall of the Soviet Union in 1991. The European Anthem is "Ode to Joy", and states celebrate peace and unity on Europe Day.
Europe
–
Reconstruction of Herodotus ' world map
Europe
Europe
–
A medieval T and O map from 1472 showing the three continents as domains of the sons of Noah — Asia to Sem (Shem), Europe to Iafeth (Japheth), and Africa to Cham (Ham)
Europe
–
Early modern depiction of Europa regina ('Queen Europe') and the mythical Europa of the 8th century BC.
40.
Nobility
–
There is often a variety of ranks within the noble class. Some countries have had non-hereditary nobility, such as the Empire of Brazil. The term derives from Latin nobilitas, the abstract noun of the adjective nobilis. In modern usage, "nobility" is applied to the highest social class in pre-modern societies, excepting the ruling dynasty. Nobility is a historical, often legal notion, differing from socio-economic status in that the latter is mainly based on income, possessions and/or lifestyle. Being wealthy or influential cannot, ipso facto, make one a noble, nor are all nobles wealthy or influential. Various republics, including former Iron Curtain countries, Greece, Mexico and Austria, have expressly abolished the conferral and use of titles of nobility for their citizens. Not all of the benefits of nobility derived from noble status per se. Usually privileges were recognised by the monarch in association with possession of a specific title, office or estate. Most nobles' wealth derived from one or more estates, large or small, that might include fields, pasture, orchards, timberland, hunting grounds, etc. An estate often also included infrastructure such as a mill to which local peasants were allowed some access, although often at a price. Nobles were expected to live "nobly", from the proceeds of these possessions. Work involving manual labour or subordination to those of lower rank was either forbidden or frowned upon socially. In some countries, the local lord could impose legal obligations on commoners. Nobles exclusively enjoyed the privilege of hunting.
Nobility
–
Detail from Très Riches Heures du Duc de Berry (The Very Rich Hours of the Duke of Berry), c. 1410, month of April
Nobility
–
Nobility offered protection in exchange for service
Nobility
–
French aristocrats, c. 1774
Nobility
–
A French political cartoon of the three orders of feudal society (1789). The rural third estate carries the clergy and the nobility.
41.
Italians
–
Italians are a nation and ethnic group native to Italy who share a common Italian culture and ancestry and speak the Italian language as a mother tongue. Italians have greatly contributed to science, the arts, technology, cuisine, sports, jurisprudence and banking, both in Italy and abroad. Italian people are generally known for their attention to clothing and family values. The term Italian has a history that goes back to pre-Roman Italy. The Greek historian Dionysius of Halicarnassus states this account together with the legend that Italy was named after Italus, mentioned also by Aristotle and Thucydides. This period of unification was followed by one of conquest beginning with the First Punic War against Carthage. In the course of the century-long struggle against Carthage, the Romans conquered Sicily, Sardinia and Corsica. The final victor, Octavian, was accorded the title of Augustus by the Senate and thereby became the first Roman emperor. Emperor Diocletian's administrative division of the empire into two parts in 285 provided only temporary relief; it became permanent in 395. In 313, the Emperor Constantine's Edict of Milan granted toleration to Christianity, and churches thereafter rose throughout the empire. However, Constantine also moved his capital to Constantinople, greatly reducing the importance of Rome. The last Western emperor, Romulus Augustulus, was deposed in 476 by a Germanic foederati general in Italy, Odoacer. His defeat marked the end of the western part of the Roman Empire. Odoacer ruled well after gaining control of Italy in 476. He was then defeated by Theodoric, the king of another Germanic tribe, the Ostrogoths.
Italians
–
Amerigo Vespucci, the notable geographer and traveller from whose name the word America is derived.
Italians
Italians
–
Christopher Columbus, the discoverer of the New World.
Italians
–
Laura Bassi, the first chairwoman of a university in a scientific field of studies.
42.
Gerolamo Cardano
–
He wrote more than 200 works on science. He made significant contributions to the study of hypocycloids, published in De proportionibus in 1570. The generating circles of these hypocycloids were used for the construction of the first high-speed printing presses. He is well known for his achievements in algebra. He was born in Pavia, the illegitimate child of Fazio Cardano, a mathematically gifted jurist and lawyer and a close personal friend of Leonardo da Vinci. His mother was in labour for three days. Shortly before his birth, his mother had to escape the Plague; her three other children died from the disease. He had a difficult time finding work after his studies had ended. In 1525, Cardano repeatedly applied to the College of Physicians in Milan, but was not admitted owing to his combative reputation and illegitimate birth. He recovered and married Lucia Banderini in 1531. Before her death in 1546, she bore three children: Giovanni Battista, Chiara and Aldo. Cardano was the first mathematician to make systematic use of numbers less than zero. In Opus novum de proportionibus he introduced the binomial theorem. Cardano kept himself solvent by being an accomplished gambler and chess player. He used the game of throwing dice to understand the basic concepts of probability.
Gerolamo Cardano
–
Gerolamo Cardano
Gerolamo Cardano
–
De propria vita, 1821
Gerolamo Cardano
–
Portrait of Cardano on display at the School of Mathematics and Statistics, University of St Andrews.
Gerolamo Cardano
–
A portrait of Gerolamo Cardano
43.
Pierre de Fermat
–
Fermat made notable contributions to analytic geometry, probability, and optics. He is best known for Fermat's Last Theorem, which he described in a note in the margin of a copy of Diophantus' Arithmetica. Fermat was born in the first decade of the 17th century in Beaumont-de-Lomagne, France; the 15th-century mansion where Fermat was born is now a museum. His mother was Claire de Long. Pierre was almost certainly brought up in the town of his birth. He was probably educated at the Collège de Navarre in Montauban. Fermat received a bachelor's degree in civil law in 1626, before moving to Bordeaux. There Fermat became much influenced by the work of François Viète. In 1630 he bought the office of councillor at the Parlement de Toulouse, and he held this office for the rest of his life. He thereby became entitled to change his name from Pierre Fermat to Pierre de Fermat. Fermat communicated most of his work in letters to friends, often with little or no proof of his theorems. In some of these letters to his friends Fermat explored many of the fundamental ideas of calculus before Newton or Leibniz. He was a trained lawyer, making mathematics more of a hobby than a profession. Nevertheless, Fermat made important contributions to analytical geometry, probability, number theory and calculus. Secrecy was common in mathematical circles at the time.
Pierre de Fermat
–
Pierre de Fermat
Pierre de Fermat
–
Bust in the Salle des Illustres in Capitole de Toulouse
Pierre de Fermat
–
Place of burial of Pierre de Fermat in Place Jean Jaurés, Castres. Translation of the plaque: in this place was buried on January 13, 1665, Pierre de Fermat, councilor of the chamber of Edit [Parlement of Toulouse] and mathematician of great renown, celebrated for his theorem a^n + b^n ≠ c^n for n > 2
Pierre de Fermat
–
Holographic will handwritten by Fermat on 4 March 1660 — kept at the Departmental Archives of Haute-Garonne, in Toulouse
44.
Blaise Pascal
–
Blaise Pascal was a French mathematician, physicist, inventor, writer and Christian philosopher. He was a child prodigy, educated by his father, a tax collector in Rouen. Pascal also wrote in defence of the scientific method. In 1642, while still a teenager, he started some pioneering work on calculating machines. Following Galileo Galilei and Torricelli, in 1646 he rebutted Aristotle's followers who insisted that nature abhors a vacuum. Pascal's results caused many disputes before being accepted. In 1646, his sister Jacqueline identified with the religious movement within Catholicism known by its detractors as Jansenism. His father died in 1651. Following a religious experience in late 1654, he began writing influential works on theology. His two most famous works date from this period: the Lettres provinciales and the Pensées, the former set in the conflict between Jansenists and Jesuits. In that year, he also wrote an important treatise on the arithmetical triangle. Between 1658 and 1659 he wrote on the cycloid and its use in calculating the volume of solids. He died just two months after his 39th birthday. Pascal was born in Clermont-Ferrand, in France's Auvergne region. He lost his mother, Antoinette Begon, at the age of three.
Blaise Pascal
–
Painting of Blaise Pascal made by François II Quesnel for Gérard Edelinck in 1691.
Blaise Pascal
–
An early Pascaline on display at the Musée des Arts et Métiers, Paris
Blaise Pascal
–
Portrait of Pascal
Blaise Pascal
–
Pascal studying the cycloid, by Augustin Pajou, 1785, Louvre
45.
Christiaan Huygens
–
Christiaan Huygens, FRS, was a prominent Dutch mathematician and scientist. He is known particularly as an astronomer, physicist and horologist. Huygens was a leading scientist of his time. He pioneered work on games of chance. Christiaan Huygens was born into a rich and influential Dutch family, the second son of Constantijn Huygens. Christiaan was named after his paternal grandfather. His mother was Suzanna van Baerle. She died in 1637, shortly after the birth of Huygens' sister. The couple had five children: Constantijn, Christiaan, Lodewijk, Philips and Suzanna. Constantijn Huygens was an advisor to the House of Orange, and also a poet and musician. His friends included Galileo Galilei and René Descartes. Huygens was educated at home until turning sixteen years old. He liked to play with miniatures of mills and other machines. His father gave him a liberal education: he studied languages and music, history and geography, mathematics, logic and rhetoric, but also dancing, fencing and horse riding. In 1644 Huygens had as his mathematical tutor Jan Jansz de Jonge Stampioen, who set the 15-year-old a demanding reading list on contemporary science.
Christiaan Huygens
–
Christiaan Huygens by Bernard Vaillant, Museum Hofwijck, Voorburg
Christiaan Huygens
–
Correspondance
Christiaan Huygens
–
The catenary in a manuscript of Huygens.
Christiaan Huygens
–
Christiaan Huygens, relief by Jean-Jacques Clérion, around 1670?
46.
Jakob Bernoulli
–
Jacob Bernoulli was one of the many prominent mathematicians in the Bernoulli family. He sided with Leibniz during the Leibniz–Newton calculus controversy. He is known for his numerous contributions to calculus and, along with his brother Johann, was one of the founders of the calculus of variations. Jacob Bernoulli was born in Basel, Switzerland. Following his father's wish, he studied theology and entered the ministry. But contrary to the desires of his parents, he also studied mathematics and astronomy. He traveled from 1676 to 1682, learning about the latest discoveries in mathematics and the sciences under leading figures of the time. This included the work of Johannes Hudde and Robert Hooke. During this time he also produced an incorrect theory of comets. Bernoulli began teaching mechanics at the University of Basel from 1683. In 1684 he married Judith Stupanus; they had two children. During this decade, he also began a fertile research career. His travels allowed him to establish correspondence with leading mathematicians and scientists of his era, which he maintained throughout his life. He also studied the works of John Wallis, leading to his interest in infinitesimal geometry. Apart from these, it was between 1684 and 1689 that many of the results that were to make up Ars Conjectandi were discovered.
Jakob Bernoulli
–
Jakob Bernoulli
Jakob Bernoulli
–
Jacob Bernoulli's grave.
47.
Ars Conjectandi
–
This early work had a large impact on both contemporary and later mathematicians; for example, Abraham de Moivre. Bernoulli wrote the text between 1684 and 1689, incorporating the work of mathematicians such as Christiaan Huygens, Gerolamo Cardano, Pierre de Fermat and Blaise Pascal. Core topics from probability, such as expected value, were also a significant portion of this important work. In 1665 Pascal's results on the arithmetic triangle, an important combinatorial concept, were posthumously published. He referred to the triangle in his work Traité du triangle arithmétique as the "arithmetic triangle". In 1662, the book La Logique ou l'Art de Penser was published anonymously in Paris. The authors presumably were Antoine Arnauld and Pierre Nicole, two leading Jansenists, who worked together with Blaise Pascal. The Latin title of this book is Ars cogitandi, which was a successful book on logic of the time. De Witt's work was not widely distributed, perhaps owing to his fall from power and death at the hands of a mob in 1672. These works showed that probability could be more than mere combinatorics. It is largely upon these foundations that Ars Conjectandi is constructed. Bernoulli's progress over time can be pursued by means of the Meditationes. Three working periods with respect to his "discovery" can be distinguished by aims and times. Finally, in the last period, the problem of measuring the probabilities is solved. Before the publication of his Ars Conjectandi, Bernoulli had produced a number of treatises related to probability, beginning in 1685.
Ars Conjectandi
–
Christiaan Huygens published the first treatise on probability
Ars Conjectandi
–
The cover page of Ars Conjectandi
Ars Conjectandi
–
Portrait of Jakob Bernoulli in 1687
Ars Conjectandi
–
Abraham de Moivre's work was built in part on Bernoulli's
48.
Abraham de Moivre
–
De Moivre was a friend of Isaac Newton and James Stirling. Even though he faced religious persecution, De Moivre remained a "steadfast Christian" throughout his life. Among his fellow Huguenot exiles in England, De Moivre was a colleague of the translator Pierre des Maizeaux. He wrote The Doctrine of Chances, said to have been prized by gamblers. He first discovered the closed-form expression for Fibonacci numbers linking the nth power of the golden ratio φ to the nth Fibonacci number. De Moivre also was the first to postulate the central limit theorem, a cornerstone of probability theory. Abraham de Moivre was born on May 26, 1667. His father, Daniel de Moivre, was a surgeon who believed in the value of education. When he was eleven, his parents sent him to the Protestant Academy at Sedan, where he spent four years studying Greek under Jacques du Rondel. The Protestant Academy of Sedan had been founded at the initiative of Françoise de Bourbon, the widow of Henri-Robert de la Marck. De Moivre then enrolled to study logic at Saumur for two years. In 1684, de Moivre moved to Paris and for the first time had formal mathematics training, with private lessons from Jacques Ozanam. The Edict of Fontainebleau of 1685 required that all children be baptized by Catholic priests. He was sent to a school that the authorities sent Protestant children to for indoctrination into Catholicism. By the time he arrived in London, de Moivre was a competent mathematician with a good knowledge of many of the standard texts.
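De Moivre's closed-form expression for the Fibonacci numbers, now usually called Binet's formula, can be checked numerically. A minimal sketch in Python (the function name is mine):

```python
from math import sqrt

def fib_closed_form(n):
    """F(n) = (phi**n - psi**n) / sqrt(5), phi the golden ratio."""
    phi = (1 + sqrt(5)) / 2   # golden ratio, root of x^2 = x + 1
    psi = (1 - sqrt(5)) / 2   # the conjugate root
    # Rounding absorbs floating-point error; the value is an integer.
    return round((phi**n - psi**n) / sqrt(5))

print([fib_closed_form(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Because |psi| < 1, the psi term shrinks rapidly, so F(n) is simply the nearest integer to phi^n / sqrt(5) for all n ≥ 0.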
Abraham de Moivre
–
Abraham de Moivre
Abraham de Moivre
–
Doctrine of chances, 1761
49.
Ian Hacking
–
Ian MacDougall Hacking, born February 18, 1936, is a Canadian philosopher specializing in the philosophy of science. Hacking earned his PhD at Cambridge under the direction of Casimir Lewy, a former student of Ludwig Wittgenstein. Hacking became a lecturer at Cambridge before shifting to Stanford University in 1974. After teaching at Stanford, Hacking spent a year at the Center for Interdisciplinary Research in Bielefeld, Germany, from 1982 to 1983. From 2000 to 2006, Hacking held the chair of Philosophy and History of Scientific Concepts at the Collège de France. He is the first Anglophone to be elected to a permanent chair in the Collège's history. After retiring from the Collège de France, he was a Professor of Philosophy from 2008 to 2010. Influenced by debates involving Thomas Kuhn, Imre Lakatos and others, he is known for bringing a historical approach to the philosophy of science. The 50th anniversary edition of Kuhn's The Structure of Scientific Revolutions includes an introduction by Hacking. Hacking himself still identifies as a Cambridge analytic philosopher. He has been a main proponent of a realism about science called "entity realism." This form of realism encourages a realistic stance towards the entities postulated by mature sciences, but skepticism towards scientific theories. He has also been influential in directing attention to the experimental and even engineering practices of science, and their relative autonomy from theory. Because of this, he moved philosophical thinking a step further than the initial historical, but heavily theory-focused, turn of Kuhn and others. After 1990, he shifted his focus to the human sciences, partly under the influence of the work of Michel Foucault.
Ian Hacking
–
Hacking at the 32nd International Wittgenstein Symposium in 2009
50.
Roger Cotes
–
He also invented the quadrature formulas known as Newton–Cotes formulas and first introduced what is known today as Euler's formula. Cotes was the first Plumian Professor at Cambridge University, from 1707 until his death. He was born in Burbage, Leicestershire. His parents were Robert Cotes and his wife Grace, née Farmer. Roger had an elder brother and a younger sister, Susanna. At first Roger attended Leicester School, where his mathematical talent was recognised. His uncle, the Reverend John Smith, took on the role of tutor to encourage Roger's talent. Smith's son, Robert Smith, would become a close associate of Roger Cotes throughout his life. Cotes entered Trinity College, Cambridge, in 1699 and graduated BA in 1706. Roger Cotes's contributions to computational methods lie heavily in the fields of astronomy and mathematics. He began his educational career with a focus on astronomy. At age 26 he became the first Plumian Professor of Astronomy and Experimental Philosophy. On his appointment as professor, Cotes opened a subscription list in an effort to provide an observatory for Trinity. Unfortunately, the observatory was still unfinished when Cotes died, and it was demolished in 1797.
Roger Cotes
–
This bust was commissioned by Robert Smith and sculpted posthumously by Peter Scheemakers in 1758.
51.
Thomas Simpson
–
Thomas Simpson FRS was a British mathematician, inventor and eponym of Simpson's rule to approximate definite integrals. Simpson was born in Leicestershire. The son of a weaver, Simpson taught himself mathematics. At the age of nineteen, he married an older widow with two children. As a youth he became interested in astrology after seeing a solar eclipse. He also caused fits in a girl after "raising a devil" from her. After this incident, he and his wife had to flee to Derby. From 1743, he taught mathematics at the Royal Military Academy, Woolwich. Simpson was a fellow of the Royal Society. In 1758, Simpson was elected a foreign member of the Royal Swedish Academy of Sciences. He was laid to rest in Sutton Cheney; a plaque inside the church commemorates him. In both works, Simpson did not claim originality beyond the presentation of some more accurate data. This type of generalization was later popularized by Alfred Weber in 1909.
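Simpson's rule approximates a definite integral by fitting parabolas through successive triples of equally spaced points. A minimal sketch in Python (the function name is mine, not from any particular library):

```python
def simpson(f, a, b, n):
    """Approximate the integral of f over [a, b] using n (even) subintervals."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    # Endpoints get weight 1; interior points alternate weights 4, 2, 4, ...
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# Integral of x^2 over [0, 1] is exactly 1/3; Simpson's rule is
# exact for polynomials up to degree three, even with n = 2.
print(simpson(lambda x: x * x, 0.0, 1.0, 2))
```

For smooth functions the error falls off as h⁴, which is why the rule remains a standard textbook quadrature method.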
Thomas Simpson
–
Miscellaneous tracts, 1768
52.
Pierre-Simon Laplace
–
Pierre-Simon, marquis de Laplace was an influential French scholar whose work was important to the development of mathematics, statistics, physics and astronomy. He summarized and extended the work of his predecessors in his five-volume Mécanique Céleste. This work translated the geometric study of classical mechanics to one based on calculus, opening up a broader range of problems. In statistics, the Bayesian interpretation of probability was developed mainly by Laplace. The Laplacian differential operator, widely used in mathematics, is also named after him. Laplace is remembered as one of the greatest scientists of all time. He was named a marquis after the Restoration. Laplace was born on 23 March 1749 in Beaumont-en-Auge, Normandy, a village four miles west of Pont l'Eveque. According to W. W. Rouse Ball, his father, Pierre de Laplace, owned and farmed the small estates of Maarquis. His great-uncle, Maitre Oliver de Laplace, had held the title of Chirurgien Royal. It was here that Laplace was educated and was provisionally a professor. It was here he wrote his first paper, published in the Mélanges of the Royal Society of Turin, Tome iv. 1766–1769, at least two years before he went at 22 or 23 to Paris in 1771. Thus before he was 20 he was in touch with Lagrange in Turin. He did not go to Paris as a raw self-taught country lad with only a peasant background!
Pierre-Simon Laplace
–
Pierre-Simon Laplace (1749–1827). Posthumous portrait by Jean-Baptiste Paulin Guérin, 1838.
Pierre-Simon Laplace
–
Laplace's house at Arcueil.
Pierre-Simon Laplace
–
Laplace.
Pierre-Simon Laplace
–
Tomb of Pierre-Simon Laplace
53.
Daniel Bernoulli
–
Daniel Bernoulli FRS was a Swiss mathematician and physicist and was one of the many prominent mathematicians in the Bernoulli family. He is particularly remembered for his applications of mathematics to mechanics, especially fluid mechanics, and for his pioneering work in probability and statistics. Daniel Bernoulli was born in Groningen into a family of distinguished mathematicians. The Bernoulli family had emigrated to escape the Spanish persecution of the Huguenots. After a brief period in Frankfurt the family moved to Basel, in Switzerland. Daniel was the son of Johann Bernoulli and a nephew of Jacob Bernoulli. He had two brothers, Nicolaus II and Johann II. Daniel Bernoulli was described by W. W. Rouse Ball as "by far the ablest of the younger Bernoullis". He is said to have had a bad relationship with his father. Johann Bernoulli also plagiarized some key ideas in his own book Hydraulica, which he backdated to before Hydrodynamica. Despite Daniel's attempts at reconciliation, his father carried the grudge until his death. Around schooling age, his father, Johann, encouraged him to study business, there being poor rewards awaiting a mathematician. However, Daniel refused, because he wanted to study mathematics. He later gave in to his father's wish and studied business. Daniel earned a PhD in anatomy and botany in 1721.
Daniel Bernoulli
–
Daniel Bernoulli
54.
Adrien-Marie Legendre
–
Adrien-Marie Legendre was a French mathematician. Legendre made numerous contributions to mathematics. Important concepts such as the Legendre polynomials and the Legendre transformation are named after him. Adrien-Marie Legendre was born in Paris on 18 September 1752 to a wealthy family. He defended his thesis in physics and mathematics in 1770. He taught at the École Militaire in Paris from 1775 to 1780 and at the École Normale from 1795. At the same time, he was associated with the Bureau des Longitudes. In 1782, the Berlin Academy awarded him a prize for his treatise on projectiles in resistant media. This treatise also brought him to the attention of Lagrange. The Académie des Sciences made Legendre an adjoint member in 1783 and an associé in 1785. In 1789 he was elected a Fellow of the Royal Society. He assisted with the Anglo-French Survey to calculate the precise distance between the Paris Observatory and the Royal Greenwich Observatory by means of trigonometry. To this end, in 1787 he visited Dover and London together with Dominique, comte de Cassini, and Pierre Méchain. The three also visited William Herschel, the discoverer of the planet Uranus. Legendre lost his private fortune during the French Revolution.
Adrien-Marie Legendre
–
1820 watercolor caricature of Adrien-Marie Legendre by French artist Julien-Leopold Boilly (see portrait debacle), the only existing portrait known
Adrien-Marie Legendre
–
1820 watercolor caricatures of the French mathematicians Adrien-Marie Legendre (left) and Joseph Fourier (right) by French artist Julien-Leopold Boilly, watercolor portrait numbers 29 and 30 of Album de 73 portraits-charge aquarellés des membres de I’Institut.
Adrien-Marie Legendre
–
Side view sketching of French politician Louis Legendre (1752–1797), whose portrait has been mistakenly used, for nearly 200 years, to represent French mathematician Adrien-Marie Legendre, i.e. up until 2005 when the mistake was discovered.
55.
Method of least squares
–
"Least squares" means that the overall solution minimizes the sum of the squares of the errors made in the results of every single equation. The most important application is in data fitting. The least-squares problem occurs in statistical regression analysis; it has a closed-form solution. When the observations come from an exponential family and mild conditions are satisfied, least-squares estimates and maximum-likelihood estimates are identical. The method of least squares can also be derived as a method of moments estimator. The use of least squares is valid and practical for more general families of functions. Also, by iteratively applying local quadratic approximation to the likelihood, the least-squares method may be used to fit a generalized linear model. For the topic of approximating a function by a sum of others using an objective function based on squared distances, see least squares (function approximation). The method was first published by Adrien-Marie Legendre. Several ideas preceded it. One was the combination of different observations taken under the same conditions, as opposed to simply trying one's best to observe and record a single observation accurately; this approach was known as the method of averages. Another was the combination of different observations taken under different conditions; this came to be known as the method of least absolute deviation. A third was the development of a criterion that can be evaluated to determine when the solution with the minimum error has been achieved. Laplace tried to define a method of estimation that minimizes the error of estimation.
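For the common case of fitting a straight line y ≈ mx + c, the least-squares solution has a closed form via the normal equations. A self-contained sketch in plain Python (names are illustrative):

```python
def fit_line(xs, ys):
    """Least-squares slope m and intercept c for y ≈ m*x + c,
    from the normal equations of simple linear regression."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - m * sx) / n
    return m, c

xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]          # data lying exactly on y = 2x + 1
print(fit_line(xs, ys))    # (2.0, 1.0)
```

With noisy data the same formulas return the line minimizing the sum of squared vertical residuals, which is exactly the criterion described above.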
Method of least squares
–
Carl Friedrich Gauss
56.
Robert Adrain
–
Robert Adrain was an Irish mathematician whose career was spent in the USA. He was considered one of the most brilliant mathematical minds of his time, during a period when few academics conducted original research. He is chiefly remembered for his formulation of the method of least squares. He taught mathematics in the United States. He was president of the York County Academy in York, Pennsylvania, from 1801 to 1805. His formulation of the method of least squares was published in 1808. Adrain certainly did not know of the work of C. F. Gauss on least squares, although it is possible that he had read A. M. Legendre's article on the topic. Adrain was a contributor to the Mathematical Correspondent, the first mathematical journal in the United States. He was elected a Fellow of the American Academy of Arts and Sciences in 1813. In 1825 he founded a somewhat more successful publication, The Mathematical Diary, published through 1832. Adrain was the father of Congressman Garnett B. Adrain. Robert Adrain died in New Jersey.
Robert Adrain
–
Robert Adrain
57.
John Herschel
–
Sir John Frederick William Herschel, 1st Baronet, KH, FRS was an English polymath: mathematician, astronomer, chemist, inventor and experimental photographer, who also did valuable botanical work. Herschel was the son of Mary Baldwin and the astronomer William Herschel, and was himself the father of twelve children. He originated the use of the Julian day system in astronomy. Herschel named four moons of Uranus. He studied at St John's College, Cambridge, graduating as Senior Wrangler in 1813. It was during his time as an undergraduate that he became friends with George Peacock. Herschel started working with his father. Herschel took up astronomy in 1816, building a reflecting telescope with a 20-foot focal length. Between 1821 and 1823 Herschel re-examined, with James South, the double stars catalogued by his father. Herschel was one of the founders of the Royal Astronomical Society in 1820. He was made a Knight of the Royal Guelphic Order in 1831. Herschel served as President of the Royal Astronomical Society three times, including 1827–29 and 1847–49. A complementary volume was published posthumously, as the General Catalogue of 10,300 Multiple and Double Stars. His views were published in an article in the Encyclopædia Metropolitana in 1845.
John Herschel
–
1867 photograph by Julia Margaret Cameron
John Herschel
–
Calotype (of model) of lunar crater Copernicus, 1842
John Herschel
–
Disa cornuta (L.) Sw. by Margaret & John Herschel
John Herschel
–
Portrait of Sir John Herschel by his daughter Margaret Louisa Herschel
58.
Carl Friedrich Gauss
–
Carl Friedrich Gauss was born on 30 April 1777 in Brunswick, as the son of poor working-class parents. He was confirmed in a church near the school he attended as a child. Gauss was a child prodigy. A contested story relates that, when he was eight, he figured out how to add up all the numbers from 1 to 100. He made his first ground-breaking mathematical discoveries while still a teenager. He completed his magnum opus, Disquisitiones Arithmeticae, in 1798 at the age of 21, though it was not published until 1801. This work has shaped the field of number theory to the present day. While at university, Gauss independently rediscovered important theorems. Gauss was so pleased by his result on the constructibility of the regular heptadecagon that he requested that one be inscribed on his tombstone. The stonemason declined, stating that the difficult construction would essentially look like a circle. The year 1796 was most productive for both Gauss and number theory. He discovered a construction of the heptadecagon on 30 March. He further advanced modular arithmetic, greatly simplifying manipulations in number theory. On 8 April he became the first to prove the quadratic reciprocity law. This remarkably general law allows mathematicians to determine the solvability of any quadratic equation in modular arithmetic.
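The trick attributed to the young Gauss pairs 1 with 100, 2 with 99, and so on: fifty pairs, each summing to 101, giving 50 × 101 = 5050. In general the sum 1 + 2 + … + n equals n(n + 1)/2, as this short check illustrates:

```python
# Gauss's pairing argument: 1+100, 2+99, ... gives n/2 pairs of (n + 1),
# so the sum of 1..n is n * (n + 1) / 2.
n = 100
pair_sum = n * (n + 1) // 2
print(pair_sum)  # 5050
assert pair_sum == sum(range(1, n + 1))
```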
Carl Friedrich Gauss
–
Carl Friedrich Gauß (1777–1855), painted by Christian Albrecht Jensen
Carl Friedrich Gauss
–
Statue of Gauss at his birthplace, Brunswick
Carl Friedrich Gauss
–
Title page of Gauss's Disquisitiones Arithmeticae
Carl Friedrich Gauss
–
Gauss's portrait published in Astronomische Nachrichten 1828
59.
Friedrich Bessel
–
Friedrich Wilhelm Bessel was a German astronomer, mathematician, physicist and geodesist. He was the first astronomer to determine reliable values for the distance to a star by the method of parallax. A special type of mathematical function was named the Bessel functions after Bessel's death, though they had originally been discovered by Daniel Bernoulli. Bessel was born into a large family in Germany, the second son of a civil servant. At the age of 14 Bessel was apprenticed to the import-export firm Kulenkamp at Bremen. The business's reliance on cargo ships led him to turn his mathematical skills to problems in navigation; this in turn led him to astronomy as a way of determining longitude. Two years later Bessel became Johann Hieronymus Schröter's assistant at Lilienthal Observatory near Bremen. There he worked on James Bradley's stellar observations to produce precise positions for some 3,222 stars. At the age of 25, Bessel was appointed director of the newly founded Königsberg Observatory by King Frederick William III of Prussia. On the recommendation of Carl Friedrich Gauss he was awarded an honorary doctorate from the University of Göttingen in March 1811. Around that time, the two men engaged in an epistolary correspondence. However, when they met in 1825, they quarrelled; the details are not known. Bessel married Johanna Hagen; his close companion and colleague was married to Johanna's sister Florentine.
Friedrich Bessel
–
C. A. Jensen, Friedrich Wilhelm Bessel, 1839 (Ny Carlsberg Glyptotek)
60.
W. F. Donkin
–
William Fishburn Donkin FRS was an astronomer and mathematician, Savilian Professor of Astronomy at the University of Oxford. He was born on 15 February 1814. His mother was Alice née Bateman, and among his uncles was Thomas Bateman. He was educated at St Peter's School, York, and in 1832 entered St Edmund Hall, Oxford. He proceeded M.A. in 1839. He continued for about six years at St Edmund Hall in the capacity of mathematical lecturer. Afterwards he was elected a Fellow of the Royal Society, and also of the Royal Astronomical Society. In 1844, he married the third daughter of the Rev. John Hawtrey of Guernsey. Donkin's poor health compelled him to live abroad during the latter part of his life. He died in November 1869. There is a list of his papers, sixteen in number, in the Catalogue of Scientific Papers published by the Royal Society. In 1861, he read a paper on The Secular Acceleration of the Moon's Mean Motion. Donkin was also a contributor to the Philosophical Magazine; in June 1850 he explained the algebra of quaternions and rotation.
W. F. Donkin
–
William F. Donkin (US Naval Observatory Library)
61.
Augustus De Morgan
–
Augustus De Morgan was a British mathematician and logician. He formulated De Morgan's laws and introduced the term mathematical induction, making its idea rigorous. Augustus De Morgan was born in Madurai, India in 1806. His father was Lieut.-Colonel John De Morgan, who held various appointments in the service of the East India Company. His mother, Elizabeth Dodson, descended from James Dodson, who computed a table of anti-logarithms, the numbers corresponding to exact logarithms. Augustus De Morgan became blind in one eye a month or two after he was born. The family moved to England when Augustus was seven months old. When De Morgan was ten years old, his father died. Mrs. De Morgan resided at various places in the southwest of England, and her son received his elementary education at various schools of no great account, though one teacher gave him an initiation into mathematical demonstration. He received his secondary education from a fellow of Oxford, who appreciated classics more than mathematics. He became an atheist, writing: "I shall use Anti-Deism to signify the opinion that there does not exist a Creator who sustains the Universe." His tutor at college was a Fellow of the Royal Society. At college he played the flute for recreation and was prominent in the musical clubs.
Augustus De Morgan
–
Augustus De Morgan (1806-1871)
Augustus De Morgan
–
Augustus De Morgan.
62.
James Whitbread Lee Glaisher
–
James Whitbread Lee Glaisher FRS FRSE FRAS, son of James Glaisher the meteorologist, was a prolific English mathematician and astronomer. He was born in Lewisham in Kent on 5 November 1848, the son of James Glaisher and his wife, Cecilia Louisa Belville. His mother was a photographer. He was educated at St Paul's School from 1858. He became somewhat of a celebrity in 1861 when he made two hot-air balloon ascents with his father to study the stratosphere. He published widely over many fields of mathematics. Glaisher was elected FRS in 1875. He was the editor-in-chief of the Messenger of Mathematics. He was also the 'tutor' of the philosopher Ludwig Wittgenstein. He was president of the Royal Astronomical Society in 1886–88 and 1901–03. When George Biddell Airy retired as Astronomer Royal in 1881, it is said that Glaisher was offered the post but declined. He lived on campus at Cambridge University and died in his lodgings there on 7 December 1928. He was a keen cyclist, but preferred his penny-farthing to the newer "safety" bicycles; he was President of the Cambridge University Cycling Club from 1882 to 1885.
James Whitbread Lee Glaisher
–
James Whitbread Lee Glaisher.
63.
Giovanni Schiaparelli
–
Giovanni Virginio Schiaparelli was an Italian astronomer and science historian. He was educated at the University of Turin and later studied under Encke. After 1859–1860 he worked for over forty years at Brera Observatory in Milan. Among Schiaparelli's contributions are his telescopic observations of Mars. In his initial observations, he named the "seas" and "continents" of Mars, and described the features he called canali. While the term "canals" indicates an artificial construction, the term "channels" connotes that the observed features were natural configurations of the planetary surface. He proved, for example, that the orbit of the Leonid meteor shower coincided with that of the comet Tempel-Tuttle. These observations led the astronomer to formulate the hypothesis, subsequently proved to be correct, that the meteor showers could be the trails of comets. He was also a keen observer of Mercury and Venus, of which he made several drawings and determined rotation periods. Only in 1965 was it shown that most other subsequent measurements of Mercury's period were incorrect. Schiaparelli was also a scholar of the history of classical astronomy. His honours include the Lalande Prize, the Gold Medal of the Royal Astronomical Society and the Bruce Medal. Named after him are the main-belt asteroid 4062 Schiaparelli (named on 15 September 1989), the lunar crater Schiaparelli, the Martian crater Schiaparelli, Schiaparelli Dorsum on Mercury and the 2016 ExoMars Schiaparelli lander. His niece Elsa Schiaparelli became a noted designer of haute couture.
Giovanni Schiaparelli
–
Giovanni Schiaparelli
Giovanni Schiaparelli
–
Schiaparelli's grave at the Monumental Cemetery of Milan, Italy
64.
Christian August Friedrich Peters
–
Christian August Friedrich Peters was a German astronomer, the father of the astronomer Carl Friedrich Wilhelm Peters. He died in Kiel. Although he did not attend secondary school regularly, he obtained a good knowledge of mathematics and astronomy. In 1826 he became assistant at Altona Observatory, and Peters later did a PhD under Friedrich Bessel at the University of Königsberg. In 1834 he became an assistant, and in 1839 he joined the staff of Pulkovo Observatory. In 1849 he became professor of astronomy at Königsberg and soon after succeeded Friedrich Wilhelm Bessel as director of the observatory there. In 1854 he became editor of the Astronomische Nachrichten, editing the journal for the rest of his life and being responsible for 58 volumes. When the observatory moved to Kiel, he moved there and continued in his post. In 1866, he was elected a foreign member of the Royal Swedish Academy of Sciences. Peters won the Gold Medal of the Royal Astronomical Society in 1852. His works include Numerus constans nutationis ex ascensionibus rectis stellae polaris in specula Dorpatensi annis 1822 ad 1838 observatis deductus and Resultate aus Beobachtungen des Polarsterns am Ertelschen Vertikalkreise.
Christian August Friedrich Peters
–
Christian August Friedrich Peters
65.
Sylvestre Lacroix
–
Sylvestre François Lacroix was a French mathematician. He was born in Paris and raised in a poor family who still managed to obtain a good education for their son. Lacroix's path to mathematics started with the novel Robinson Crusoe, which gave him an interest in sailing and thus navigation too; from there geometry captured him, and the rest of mathematics followed. He had courses with Antoine-René Mauduit at the Collège Royale de France and Joseph-François Marie at the Collège Mazarin of the University of Paris. In 1779 he began his first calculations, and the next year he followed some lectures of Gaspard Monge. In 1782, at the age of 17, he became an instructor in mathematics at the École de Gardes de la Marine in Rochefort. Monge was the students' examiner and Lacroix's supervisor there until 1795. Returning to Paris, Condorcet hired Lacroix to fill in for him as instructor of gentlemen at a Paris lycée. In 1787 he began to teach at the École Royale Militaire de Paris, and he married Marie Nicole Sophie Arcambal. From 1788, he taught courses under examiner Pierre-Simon Laplace. The posting in Besançon lasted until 1793, when Lacroix returned to Paris. It was the worst of times: Lavoisier had opened an inquiry into a subject Lacroix studied with Jean Henri Hassenfratz.
Sylvestre Lacroix
–
Sylvestre François Lacroix
66.
Adolphe Quetelet
–
Lambert Adolphe Jacques Quetelet ForMemRS was a Belgian astronomer, mathematician, statistician and sociologist. He was influential in introducing statistical methods to the social sciences. His name is sometimes spelled with an accent, as Quételet. He developed the body mass index (BMI) scale. Adolphe was born in Ghent, the son of François-Augustin-Jacques-Henri Quetelet, a Frenchman, and Anne Françoise Vandervelde, a Flemish woman. His father traveled on the Continent, particularly spending time in Italy, and died when Adolphe was only seven years old. Adolphe studied at the Ghent lycée, where he started teaching mathematics at the age of 19. In the same year he completed his dissertation, and Quetelet received a doctorate from the University of Ghent. Thereafter, the young man set out to convince government officials and private donors to build an astronomical observatory in Brussels; he succeeded in 1828. He became a member of the Royal Academy in 1820. He lectured on sciences and letters and at the Belgian Military School. In 1825 he became a correspondent of the Royal Institute of the Netherlands, and in 1827 he became a member. When it became the Royal Netherlands Academy of Arts and Sciences he became a foreign member.
Adolphe Quetelet
–
Adolphe Quetelet
67.
Richard Dedekind
–
Julius Wilhelm Richard Dedekind was a German mathematician who made important contributions to abstract algebra, algebraic number theory and the definition of the real numbers. Dedekind's father was an administrator of the Collegium Carolinum in Braunschweig. He had three older siblings. As an adult, Dedekind never used the names Julius Wilhelm. He was born, and died, in Braunschweig. Dedekind first attended the Collegium Carolinum in 1848 before transferring to the University of Göttingen in 1850. There, he was taught number theory by professor Moritz Stern. Gauss was still teaching there, and Dedekind became his last student. He received his doctorate for a thesis titled Über die Theorie der Eulerschen Integrale. This thesis did not display the talent evident in Dedekind's subsequent publications. At that time, Berlin, not Göttingen, was the main centre for mathematical research in Germany. Thus Dedekind went to Berlin for two years of study, where he and Bernhard Riemann were contemporaries; they were both awarded the habilitation in 1854. He then returned to Göttingen, giving courses on probability and geometry. They became good friends. Aware of lingering weaknesses in his mathematical knowledge, Dedekind studied elliptic and abelian functions.
Richard Dedekind
–
Richard Dedekind
Richard Dedekind
–
East German stamp from 1981, commemorating Richard Dedekind.
68.
Hermann Laurent
–
Paul Matthieu Hermann Laurent was a French mathematician. Despite his large body of work, Laurent series expansions for complex functions were named not after him but after Pierre Alphonse Laurent. His works include a treatise of 1862; Traité d'analyse (1885); Élimination (1900); Sur les principes fondamentaux de la théorie des nombres et la géométrie (1902); La Géométrie analytique générale (1906); and Statistique mathématique (1908).
Hermann Laurent
–
Paul Matthieu Hermann Laurent
69.
Karl Pearson
–
Karl Pearson FRS was an influential English mathematician and biostatistician. He has been credited with establishing the discipline of mathematical statistics. Pearson was also a biographer of Sir Francis Galton. Pearson had two siblings, Arthur and Amy. He travelled to Germany to study physics at the University of Heidelberg under Kuno Fischer. He next visited the University of Berlin, where he attended the lectures of the famous physiologist Emil du Bois-Reymond on Darwinism. Pearson also studied Roman Law, taught by Bruns and Mommsen, as well as socialism. He spent much of the 1880s in Berlin, Heidelberg, Vienna, Saig bei Lenzkirch and Brixlegg. He was a founder of the Men and Women's Club. Pearson was offered a Germanics post at Cambridge. Comparing Cambridge students to those he knew from Germany, Karl found German students weak. He wrote: "Now I think it can not be too highly valued. Have you ever attempted to conceive all there is in the world worth knowing—that not one subject in the universe is unworthy of study? Mankind seems on the verge of a glorious discovery. What Newton did to simplify the planetary motions must now be done to unite in one whole the various isolated theories of mathematical physics."
Karl Pearson
–
Portrait of Karl Pearson, by Elliott & Fry, 1890.
Karl Pearson
–
Karl Pearson at work, 1910.
70.
George Boole
–
George Boole was an English mathematician, educator, philosopher and logician. His Boolean logic is credited with laying the foundations for the information age. He was born in Lincoln, Lincolnshire, England, the son of Mary Ann Joyce. Boole had little formal and academic teaching. William Brooke, a bookseller in Lincoln, may have helped him with Latin, which he may also have learned at the school of Thomas Bainbridge. He was self-taught in modern languages. At age 16 he became the breadwinner for his parents and three younger siblings, taking up a junior teaching position in Doncaster at Heigham's School. He also taught briefly in Liverpool. Boole participated in the Lincoln Mechanics' Institute, in the Greyfriars, Lincoln, founded in 1833. Without a teacher, it took him many years to master calculus. At age 19, Boole successfully established his own school in Lincoln. Four years later he took over Hall's Academy following the death of Robert Hall. In 1840 he moved back to Lincoln, where he ran a boarding school. Boole immediately became involved in the Lincoln Topographical Society, on which he served as a member of the committee. Boole became a prominent local figure, an admirer of John Kaye, the bishop.
George Boole
–
Boole in about 1860
George Boole
–
Boole's House and School at 3 Pottergate in Lincoln
George Boole
–
Plaque from the house in Lincoln
George Boole
–
The house at 5 Grenville Place in Cork, in which Boole lived between 1849 and 1855, and where he wrote The Laws of Thought
71.
Andrey Markov
–
Andrey Andreyevich Markov was a Russian mathematician, best known for his work on stochastic processes. A primary subject of his research later became known as Markov processes. His younger brother Vladimir Andreevich Markov proved the Markov brothers' inequality. Another Andrei Andreevich Markov, his son, was also a notable mathematician, making contributions to constructive mathematics and recursive function theory. Andrey Markov was born on 14 June 1856 in Russia. He attended Petersburg Grammar School, where his talent was seen by only a select few of his teachers; in his academics Markov performed poorly in most subjects other than mathematics. Later in life Markov was lectured by Pafnuty Chebyshev. Among his teachers were Yulian Sokhotski, Konstantin Posse, Yegor Zolotarev, Pafnuty Chebyshev, Aleksandr Korkin, Mikhail Okatov, Osip Somov and Nikolai Budaev. Markov was later asked if he would like to stay and have a career as a mathematician, and he continued his own mathematical studies. In this time Markov found a practical use for his mathematical skills: he figured out that he could use chains to model the alternation of vowels and consonants in Russian literature. Markov also contributed to many other areas of mathematics in his time.
Andrey Markov
–
Andrey (Andrei) Andreyevich Markov
Andrey Markov
–
Markov in 1886
Andrey Markov
–
Markov's headstone
72.
Markov chains
–
A Markov chain is a stochastic process with the Markov property: conditional on the present state of the system, its future and past are independent. In discrete time, the process is known as a discrete-time Markov chain. In continuous time, the process is known as a continuous-time Markov chain; it takes values in some state space, and the time spent in each state takes non-negative real values and has an exponential distribution. Future behaviour of the model depends only on the current state of the model and not on historical behaviour. Markov chains have many applications as statistical models of real-world processes. To define a Markov chain, the system's state space and time parameter index need to be specified. In addition, there are other extensions of Markov processes that are referred to as such but do not necessarily fall within any of these four categories. Notice that the general state space continuous-time Markov chain is general to such a degree that it has no designated term. However, many applications of Markov chains employ finite or countably infinite state spaces, which have a more straightforward statistical analysis. Besides time-index and state-space parameters, there are many other generalisations. For simplicity, most of this article concentrates on the discrete-time, discrete state-space case, unless mentioned otherwise. The changes of state of the system are called transitions, and the probabilities associated with various state changes are called transition probabilities.
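A discrete-time, discrete state-space chain like the two-state example in the figure can be simulated by sampling each step from the current state's row of transition probabilities. The sketch below uses hypothetical probabilities chosen for illustration; only the structure (next state depends solely on the current state) comes from the text:

```python
import random

# Hypothetical transition probabilities for a two-state chain; each row sums to 1.
P = {"A": {"A": 0.7, "B": 0.3},
     "B": {"A": 0.4, "B": 0.6}}

def step(state, rng):
    """Sample the next state given only the current one (the Markov property)."""
    r = rng.random()
    cumulative = 0.0
    for nxt, p in P[state].items():
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point round-off in the row sum

def simulate(start, n, seed=0):
    """Generate a sample path of n transitions from the given start state."""
    rng = random.Random(seed)
    chain = [start]
    for _ in range(n):
        chain.append(step(chain[-1], rng))
    return chain
```

For example, `simulate("A", 10)` returns an 11-element path through the states "A" and "B"; fixing the seed makes the sample path reproducible.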
Markov chains
–
Russian mathematician Andrey Markov.
Markov chains
–
A simple two-state Markov chain
73.
Stochastic process
–
A stochastic process is a probability model used to describe phenomena that evolve over time or space. It is the probabilistic counterpart to a deterministic process. In the simple case of discrete time, as opposed to continuous time, a stochastic process is a sequence of random variables. One approach may be to model these random variables as random functions of several deterministic arguments. Although the random values of a stochastic process at different times may be independent random variables, in most commonly considered situations they exhibit complicated statistical dependence. The random field is defined by letting the variables be parametrized by members of a topological space instead of time. Examples of random fields include static images, random terrain and composition variations of a heterogeneous material. Formally, a stochastic process X is a collection {X_t : t ∈ T} where each X_t is an S-valued random variable on Ω. The space S is then called the state space of the process. Let X be a stochastic process. For every finite sequence T′ = (t_1, …, t_k) ∈ T^k, the k-tuple X_{T′} = (X_{t_1}, …, X_{t_k}) is a random variable taking values in S^k. The distribution P_{T′}(·) = P(X_{T′} ∈ ·) of this random variable is a probability measure on S^k. This is called a finite-dimensional distribution of X. Under suitable topological restrictions, a suitably "consistent" collection of finite-dimensional distributions can be used to define a stochastic process. Stochastic processes were first studied rigorously in the late 19th century to aid in understanding financial markets and Brownian motion.
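The discrete-time case described above can be made concrete with a simple random walk: each X_t is a random variable, and one run of the function below produces a single sample path (x_0, x_1, …, x_n). The ±1 increments are an illustrative assumption, not something specified in the text:

```python
import random

def random_walk(n_steps, seed=0):
    """One sample path of a discrete-time stochastic process X_0, X_1, ...,
    where each increment is an independent +1/-1 coin flip."""
    rng = random.Random(seed)
    x, path = 0, [0]
    for _ in range(n_steps):
        x += rng.choice([-1, 1])
        path.append(x)
    return path
```

Re-running with different seeds draws different realizations of the same process; statistics over many runs approximate the finite-dimensional distributions discussed above.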
Stochastic process
–
Stock market fluctuations have been modeled by stochastic processes.
74.
Andrey Kolmogorov
–
Andrey Kolmogorov was born in 1903. His unmarried mother, Maria Y. Kolmogorova, died giving birth to him, and Andrey was raised by two of his aunts at the estate of his grandfather, a well-to-do nobleman. Little is known about Andrey's father, Nikolai, an agronomist who had been exiled for his participation in the revolutionary movement against the czars; he was presumed to have been killed in the Russian Civil War. As a schoolboy, Andrey was the "editor" of the mathematical section of the school journal. In 1910, he and his aunt moved to Moscow, where he graduated from high school in 1920. Later Kolmogorov began to study at Moscow State University and, at the same time, the Mendeleev Moscow Institute of Chemistry and Technology. Kolmogorov writes about this time: "I arrived at Moscow University with a fair knowledge of mathematics. I knew in particular the beginning of set theory. I studied many questions in articles of Brockhaus and Efron, filling out for myself what was presented too concisely in these articles." Kolmogorov gained a reputation for his wide-ranging erudition.
Andrey Kolmogorov
–
Andrey Kolmogorov
Andrey Kolmogorov
–
Kolmogorov (left) delivers a talk at a Soviet information theory symposium. (Tallinn, 1973).
Andrey Kolmogorov
–
Kolmogorov works on his talk (Tallinn, 1973).
75.
Artemas Martin
–
Artemas Martin was a self-educated American mathematician. Martin grew up in Venango County, Pennsylvania, and worked as a farmer and schoolteacher. In 1881, he declined an invitation to become a professor of mathematics at the Normal School in Missouri. He died in November 1918. From 1870 to 1875, he was editor of Clark's School Visitor, one of the magazines to which he had previously contributed. From 1875 to 1876 Martin moved to the Normal Monthly, where he published 16 articles on diophantine analysis. He subsequently became editor of the Mathematical Visitor and of the Mathematical Magazine in 1882. Martin maintained an extensive mathematical library, now in the collections of American University. In 1877 Martin was given an honorary M.A. from Yale University. He was also a member of the American Mathematical Society, the Circolo Matematico di Palermo, and the Deutsche Mathematiker-Vereinigung.
Artemas Martin
–
Artemas Martin (US Naval Observatory)
76.
History of statistics
–
The history of statistics can be said to start around 1749, although, over time, there have been changes to the interpretation of the word statistics. In early times, the meaning was restricted to information about states. In modern terms, "statistics" means both sets of collected information, as in national accounts and temperature records, and analytical work which requires statistical inference. A number of statistical concepts have had an important impact on a wide range of sciences. By the 18th century, the term "statistics" designated the systematic collection of economic data by states. For at least two millennia, these data were mainly tabulations of material resources that might be taxed or put to military use. In the early 19th century, the meaning of "statistics" broadened to include the discipline concerned with the collection, summary and analysis of data. Statistics are now computed and widely distributed in government, business, most of the sciences and sports, and even for many pastimes. Electronic computers have expedited more elaborate statistical computation even as they have facilitated the collection and aggregation of data. A single data analyst may have available a set of data-files with millions of records, each with hundreds of separate measurements, collected over time from computer activity and the like. The term "mathematical statistics" designates the mathematical theories of probability and statistical inference, which are used in statistical practice. The relation between statistics and probability theory developed rather late, however. By 1800, astronomy used statistical theories, particularly the method of least squares. Much of this theoretical work was readily available by the time computers were available to exploit it.
History of statistics
–
Sir William Petty, a 17th-century economist who used early statistical methods to analyse demographic data.
History of statistics
–
Carl Friedrich Gauss, mathematician who developed the method of least squares in 1809.
History of statistics
–
Karl Pearson, the founder of mathematical statistics.
78.
Set (mathematics)
–
In mathematics, a set is a well-defined collection of distinct objects, considered as an object in its own right. Sets are one of the most fundamental concepts in mathematics. The German word Menge, rendered as "set" in English, was coined by Bernard Bolzano in his work The Paradoxes of the Infinite. The objects that make up a set can be anything: numbers, people, letters of the alphabet, other sets, and so on. Sets are conventionally denoted with capital letters. Sets A and B are equal if and only if they have precisely the same elements. There are two ways of describing, or specifying the members of, a set. One way is by intensional definition, using a rule or semantic description: A is the set whose members are the first four positive integers; B is the set of colors of the French flag. The second way is by extension, that is, listing each member of the set. An extensional definition is denoted by enclosing the list of members in curly brackets: C = {4, 2, 1, 3} and D = {blue, white, red}. One often has the choice of specifying a set either intensionally or extensionally. In the examples above, for instance, A = C and B = D. There are two important points to note about sets.
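The two styles of definition map directly onto set syntax in most programming languages. In the sketch below, the extensional sets are written out member by member, while the intensional set A is built from a rule (a comprehension); the concrete member values are the standard illustrative ones assumed here, since the extracted text elides them:

```python
# Extensional definitions: list each member inside curly brackets.
C = {4, 2, 1, 3}
D = {"blue", "white", "red"}

# Intensional definition: a rule picking out the members
# ("the first four positive integers").
A = {n for n in range(1, 5)}

# Sets are unordered collections of distinct objects, so the two
# definitions yield equal sets, and duplicates collapse.
assert A == C
assert {1, 2, 2, 3} == {1, 2, 3}
```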
Set (mathematics)
–
A set of polygons in a Venn diagram
79.
Risk
–
Risk is the potential of gaining or losing something of value. Values can be lost when taking risk resulting from a given action or inaction, foreseen or unforeseen. Risk can also be defined as the intentional interaction with uncertainty. Uncertainty is a potential, uncontrollable outcome; risk is a consequence of action taken in spite of uncertainty. Risk perception is the subjective judgment people make about the severity and probability of a risk, and may vary from person to person. Some activities are much riskier than others. The Oxford English Dictionary cites the earliest use of the word in English, in the spelling risk, from 1655. It defines risk as: the possibility of loss, or other adverse or unwelcome circumstance; a chance or situation involving such a possibility. Definitions vary by field. Project management: an uncertain condition that, if it occurs, has an effect on at least one objective. A general quantitative definition: the probability of something happening multiplied by the resulting cost or benefit if it does. Finance: the possibility that an actual return on an investment will be lower than the expected return. Insurance: a situation where the probability of a variable is known but its mode of occurrence or the actual value of the occurrence is not (a risk is not an uncertainty, nor a hazard). Securities trading: the probability of a loss or drop in value.
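The "probability multiplied by cost or benefit" definition is simply an expected value, as in the glossary's die-roll example. A minimal sketch, using hypothetical outcome probabilities and payoffs chosen only for illustration:

```python
def expected_value(outcomes):
    """Sum of probability * payoff over all possible outcomes."""
    return sum(p * payoff for p, payoff in outcomes)

# Hypothetical venture: 10% chance of losing 5000, 90% chance of gaining 1000.
ev = expected_value([(0.10, -5000.0), (0.90, 1000.0)])

# The glossary's die roll: each face 1..6 with probability 1/6.
die = expected_value([(1 / 6, face) for face in range(1, 7)])  # 3.5
```

Here the venture's expected value is positive (+400) even though one outcome is a large loss, which is exactly why risk is often reported alongside, not instead of, the expectation.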
Risk
–
Firefighters at work
80.
Market (economics)
–
A market is one of the many varieties of systems, institutions, procedures, social relations and infrastructures whereby parties engage in exchange. While parties may exchange goods and services by barter, most markets rely on sellers offering their goods or services in exchange for money from buyers. It can be said that a market is the process by which the prices of goods and services are established. Markets facilitate trade and enable the distribution and allocation of resources in a society. Markets allow any trade-able item to be evaluated and priced. Markets can also be worldwide, as in the global diamond trade. National economies can be classified, for example, as developed markets or developing markets. In mainstream economics, the concept of a market is any structure that allows buyers and sellers to exchange any type of goods, services and information. The exchange, with or without money, is a transaction. A major topic of debate is how much a given market can be considered to be a "free market", free from government intervention. However, it is not always clear how the allocation of resources can be improved, since there is always the possibility of failure.
Market (economics)
–
Financial markets
Market (economics)
–
Corn Exchange, in London circa 1809.
Market (economics)
–
A market in Râmnicu Vâlcea by Amedeo Preziosi.
Market (economics)
–
Cabbage market by Václav Malý.
81.
Actuarial science
–
Actuarial science is the discipline that applies mathematical and statistical methods to assess risk in insurance, finance and other industries and professions. Actuaries are professionals who are qualified in this field through intense education and experience. In many countries, actuaries must demonstrate their competence by passing a series of thorough professional examinations. Actuarial science includes a number of interrelated subjects, including mathematics, probability theory, statistics, finance, economics and computer science. Historically, actuarial science used deterministic models in the construction of tables and premiums. Many universities have undergraduate and graduate degree programs in actuarial science. In 2010, a study published by the job search website CareerCast ranked actuary as the #1 job in the United States. The study used five key criteria to rank jobs: environment, income, employment outlook, physical demands and stress. Long-term coverage such as life insurance and annuities required that money be set aside to pay future benefits, such as annuity and death benefits many years into the future. This led to the development of an important actuarial concept, referred to as the present value of a future sum. Certain aspects of the actuarial methods for discounting pension funds have come under criticism from modern financial economics. Actuarial factors underlay the development of the Resource-Based Relative Value Scale at Harvard in a multi-disciplined study. Actuarial science also aids in the design of benefit structures, reimbursement standards and the assessment of the effects of proposed government standards on the cost of healthcare. It is common with mergers and acquisitions that several pension plans have to be combined or at least administered on an equitable basis. Benefit plan liabilities have to be properly valued, reflecting both earned benefits for past service and the benefits for future service.
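The present value of a future sum, mentioned above, discounts a payment due in the future back to today. A minimal sketch assuming a constant annual discount rate (the rate, amount and horizon below are hypothetical illustration values):

```python
def present_value(future_sum, rate, years):
    """Discount a future payment back to today at a constant annual rate:
    PV = FV / (1 + rate)**years."""
    return future_sum / (1 + rate) ** years

# A hypothetical benefit of 1000 due in 10 years, discounted at 5% per year:
pv = present_value(1000.0, 0.05, 10)  # roughly 613.9
```

The key property is that money set aside today grows at the assumed rate, so a smaller reserve suffices to meet the future obligation; the longer the horizon or the higher the rate, the smaller the present value.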
Actuarial science
–
2003 US mortality (life) table, Table 1, Page 1
82.
Environmental regulation
–
The broad category of "environmental law" may be broken down into a number of more specific regulatory subjects. While there is no agreed-upon taxonomy, the core environmental law regimes address environmental pollution. Other areas, such as environmental assessment, may not fit neatly into either category, but are nonetheless important components of environmental law. Environmental assessments may be subject to judicial review. Air quality laws govern the emission of air pollutants into the atmosphere. A specialized subset of air quality laws regulates the quality of air inside buildings. Air quality laws are often designed specifically to protect human health by limiting or eliminating airborne pollutant concentrations. Regulatory efforts include identifying and categorizing air pollutants and dictating necessary or appropriate mitigation technologies. Water quality laws govern the release of pollutants into water resources, including surface water, ground water, and stored drinking water. Some water quality laws, such as drinking water regulations, may be designed solely with reference to human health. Regulatory efforts may include identifying and categorizing water pollutants and limiting pollutant discharges from effluent sources. Regulatory areas include control of surface runoff from construction sites and urban environments. Waste management laws govern the handling of all manner of waste; regulatory efforts include mandating transport, treatment, storage, and disposal practices. Environmental cleanup laws govern the removal of pollution or contaminants from environmental media such as soil, sediment, and ground water. Chemical safety laws govern the use of chemicals in human activities, particularly man-made chemicals in industrial applications.
Environmental regulation
–
Industrial air pollution now regulated by air quality law.
Environmental regulation
–
A typical stormwater outfall, subject to water quality law.
Environmental regulation
–
A municipal landfill, operated pursuant to waste management law.
83.
Financial regulation
–
This may be handled by either a government or non-government organization. Financial regulation has also influenced the structure of banking sectors by increasing the variety of financial products available, and by regulating foreign participation in the financial markets. Acts empower organizations, government and non-government, to monitor activities and enforce actions. There are various combinations in place for the financial regulatory structure around the globe. Exchange acts ensure that trading on the exchanges is conducted in a proper manner, most prominently the pricing process, execution and settlement of trades, and direct and efficient trade monitoring. Financial regulators ensure that listed companies and market participants comply with the trading acts. The trading acts demand that listed companies publish regular financial reports and directors' dealings, whereas market participants are required to publish major shareholder notifications. Investment acts ensure the frictionless operation of investment vehicles. Banking acts lay down rules for banks which they have to observe when they are carrying on their business. These rules are designed to prevent unwelcome developments that might disrupt the smooth functioning of the banking system, thus ensuring a strong and efficient system. The following is a short listing of regulatory authorities; for a more complete listing, please see the list of financial regulatory authorities by country.
Financial regulation
84.
Behavioral finance
–
Risk tolerance is a crucial factor that influences a wide range of financial decisions. Risk tolerance is defined as an individual's willingness to engage in a financial activity whose outcome is uncertain. Behavioral economics is primarily concerned with the bounds of rationality of economic agents. Behavioral models typically integrate insights from psychology with microeconomic theory; in so doing, these behavioral models cover a range of concepts, methods, and fields. The study of behavioral economics includes how market decisions are made and the mechanisms that drive public choice. The use of the term in U.S. scholarly papers has increased in the past few years, as shown by a recent study. There are three prevalent themes in behavioral finance. Heuristics: people often make decisions based on approximate rules of thumb, not strict logic. Framing: the collection of anecdotes and stereotypes that make up the mental and emotional filters individuals rely on to understand and respond to events. Market inefficiencies: these include mis-pricings and non-rational decision making. During the classical period of economics, microeconomics was closely linked to psychology. Neo-classical economists developed the concept of homo economicus, whose psychology was fundamentally rational. Nevertheless, many neo-classical economists employed more sophisticated psychological explanations, including Francis Edgeworth, Vilfredo Pareto, and Irving Fisher. Economic psychology emerged in the works of Gabriel Tarde and Laszlo Garai. Expected utility and discounted utility models began generating testable hypotheses about decision-making under uncertainty and intertemporal consumption, respectively. In the 1960s cognitive psychology began to shed more light on the brain as an information processing device.
Behavioral finance
–
Daniel Kahneman, winner of 2002 Nobel prize in economics
Behavioral finance
–
World GDP (PPP) per capita by country (2014)
85.
Groupthink
–
There is loss of individual creativity, uniqueness and independent thinking. The dysfunctional group dynamics of the "ingroup" produces an "illusion of invulnerability". Thus the "ingroup" significantly underrates the abilities of its opponents. Furthermore, groupthink can produce dehumanizing actions against the "outgroup". Antecedent factors such as group cohesiveness and situational context play into the likelihood of whether or not groupthink will impact the decision-making process. Most of the initial research on groupthink was conducted by Irving Janis, a research psychologist at Yale University. Janis published an influential book in 1972, revised in 1982. Janis used the Bay of Pigs invasion of 1961 and the Japanese attack on Pearl Harbor in 1941 as his two prime case studies. Later studies have reformulated his groupthink model. We are not talking about instinctive conformity -- it is, after all, a perennial failing of mankind. Irving Janis pioneered the initial research on the theory. Groupthink is a term of the same order as the words in the vocabulary George Orwell used in his dismaying world of 1984. In that context, groupthink takes on an invidious connotation. After this study he remained interested in the ways in which people make decisions under external threats. He concluded that in each of these cases, the decisions occurred largely because of groupthink, which prevented contradictory views from being subsequently evaluated.
Groupthink
–
From "Groupthink" by William H. Whyte, Jr. in Fortune magazine, March 1952
86.
Reliability (statistics)
–
Reliability in statistics and psychometrics is the overall consistency of a measure. A measure is said to have a high reliability if it produces similar results under consistent conditions. Scores that are highly reliable are consistent from one occasion to another. That is, if the testing process were repeated with a group of test takers, essentially the same results would be obtained. Various kinds of reliability coefficients, with values ranging between 0.00 and 1.00, are usually used to indicate the amount of error in the scores. For example, measurements of people's height and weight are often extremely reliable. There are several general classes of reliability estimates. Inter-rater reliability assesses the degree of agreement between two or more raters in their appraisals. Test-retest reliability assesses the degree to which test scores are consistent from one test administration to the next. Measurements are gathered from a single rater who uses the same methods or instruments and the same testing conditions. This includes intra-rater reliability. Inter-method reliability assesses the degree to which test scores are consistent when there is a variation in the methods or instruments used. This allows inter-rater reliability to be ruled out. When dealing with alternate forms, it may be termed parallel-forms reliability. Internal consistency reliability assesses the consistency of results across items within a test. Reliability does not imply validity.
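As an illustration of a reliability coefficient on the 0.00 to 1.00 scale, a test-retest estimate can be computed as the Pearson correlation between two administrations of the same test. The scores below are invented for the sketch, not taken from any real study.

```python
def pearson(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

first = [85, 90, 78, 92, 88]    # scores on the first administration
second = [83, 91, 80, 90, 87]   # scores on the retest
r = pearson(first, second)      # a value near 1.0 indicates high reliability
```

Here the two administrations rank the test takers almost identically, so the coefficient comes out close to 1.0.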
Reliability (statistics)
–
Validity & Reliability
87.
Automobiles
–
A car is a wheeled, self-powered motor vehicle used for transportation and a product of the automotive industry. The year 1886 is regarded as the birth year of the modern car. In that year, German inventor Karl Benz built the Benz Patent-Motorwagen. Cars did not become widely available until the early 20th century. One of the first cars accessible to the masses was the 1908 Model T, an American car manufactured by the Ford Motor Company. Cars are equipped with controls used for driving, parking, passenger comfort and safety, and controlling a variety of lights. Over the decades, additional features and controls have been added to vehicles, making them progressively more complex. Examples include rear reversing cameras, air conditioning, navigation systems, and in-car entertainment. Most cars in use in the 2010s are propelled by an internal combustion engine, fueled by deflagration of gasoline or diesel. Both fuels cause air pollution and are also blamed for contributing to climate change and global warming. Vehicles using alternative fuels such as ethanol flexible-fuel vehicles and natural gas vehicles are also gaining popularity in some countries. Electric cars, which were invented early in the history of the car, began to become commercially available in 2008. There are costs and benefits to car use. Road traffic accidents are the largest cause of injury-related deaths worldwide. The benefits may include convenience.
Automobiles
–
Benz "Velo" model (1894) by German inventor Carl Benz – entered into an early automobile race as a motocycle
Automobiles
–
A modern car, BMW E90
Automobiles
–
Cugnot's 1771 fardier à vapeur, as preserved at the Musée des Arts et Métiers, Paris
Automobiles
–
Karl Benz, the inventor of the modern car
88.
Natural language processing
–
Natural language processing is a field of computer science, artificial intelligence, and computational linguistics concerned with the interactions between computers and human languages. As such, NLP is related to the area of human–computer interaction. Many challenges in NLP involve natural language understanding, that is, enabling computers to derive meaning from human or natural language input; others involve natural language generation. The history of NLP generally starts in the 1950s, although work can be found from earlier periods. In 1950, Alan Turing published an article titled "Computing Machinery and Intelligence" which proposed what is now called the Turing test as a criterion of intelligence. The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian sentences into English. The authors claimed that within five years, translation would be a solved problem. Little further research in machine translation was conducted until the late 1980s, when the first statistical machine translation systems were developed. Using almost no information about human thought or emotion, ELIZA, a simulation of a psychotherapist written by Joseph Weizenbaum, sometimes provided a startlingly human-like interaction. During the 1970s many programmers began to write "conceptual ontologies", which structured real-world information into computer-understandable data. Examples are MARGIE, SAM, PAM, TaleSpin, QUALM, Politics, and Plot Units. During this time, many chatterbots were written, including PARRY, Racter, and Jabberwacky. Up to the 1980s, most NLP systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in NLP with the introduction of machine learning algorithms for language processing. Some of the earliest-used machine learning algorithms, such as decision trees, produced systems of hard if-then rules similar to existing hand-written rules.
Natural language processing
–
An automated online assistant providing customer service on a web page, an example of an application where natural language processing is a major component.
89.
Power set
–
In mathematics, the power set of any set S is the set of all subsets of S, including the empty set and S itself. In axiomatic set theory, the existence of the power set of any set is postulated by the axiom of power set. Any subset of P(S) is called a family of sets over S. If S is the set {x, y, z}, then the subsets of S are {} (the empty set), {x}, {y}, {z}, {x, y}, {x, z}, {y, z}, and {x, y, z}, and hence the power set of S is {{}, {x}, {y}, {z}, {x, y}, {x, z}, {y, z}, {x, y, z}}. If S is a finite set with |S| = n elements, then the number of subsets of S is |P(S)| = 2^n. This fact, the motivation for the notation 2^S, may be demonstrated simply as follows. First, order the elements of S in any manner. We write any subset of S in the format {γ_1, γ_2, ..., γ_n} where each γ_i, 1 ≤ i ≤ n, can take the value 0 or 1. If γ_i = 1, the i-th element of S is in the subset; otherwise, the i-th element is not in the subset. Clearly the number of distinct subsets that can be constructed this way is 2^n, as γ_i ∈ {0, 1}. Cantor's diagonal argument shows that the power set of a set always has strictly higher cardinality than the set itself. In particular, Cantor's theorem shows that the power set of a countably infinite set is uncountably infinite. The power set of the set of natural numbers can be put in a one-to-one correspondence with the set of real numbers. In fact, one can show that any finite Boolean algebra is isomorphic to the Boolean algebra of the power set of a finite set. Every infinite Boolean algebra can be represented as a subalgebra of a power set Boolean algebra. It can hence be shown that the power set, considered together with the operations of symmetric difference (as addition) and intersection (as multiplication), forms a Boolean ring.
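The indicator encoding described above maps directly onto binary counting, which gives a short sketch for enumerating a power set:

```python
def power_set(s):
    """Enumerate all subsets of s using binary indicator bits."""
    elems = list(s)
    n = len(elems)
    # Each integer mask in 0 .. 2**n - 1 is a string of indicator bits:
    # bit i plays the role of gamma_i, marking whether the i-th element
    # of S belongs to the subset.
    return [{elems[i] for i in range(n) if (mask >> i) & 1}
            for mask in range(2 ** n)]

subsets = power_set({'x', 'y', 'z'})  # 2**3 == 8 subsets,
                                      # including {} and S itself
```

Counting from 0 (all bits clear, the empty set) to 2^n − 1 (all bits set, S itself) visits each subset exactly once.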
Power set
–
The elements of the power set of the set { x, y, z } ordered with respect to inclusion.
90.
Function (mathematics)
–
An example is the function that relates each real number x to its square x^2. The output of a function f corresponding to an input x is denoted by f(x). In this example, if the input is 3, we may write f(3) = 9. Likewise, if the input is −3, then the output is also 9, and we may write f(−3) = 9. The input variable is sometimes referred to as the argument of the function. Functions of various kinds are "the central objects of investigation" in most fields of modern mathematics. There are many ways to describe or represent a function. Some functions may be defined by a formula or algorithm that tells how to compute the output for a given input. Others are given by a picture, called the graph of the function. In science, functions are sometimes defined by a table that gives the outputs for selected inputs. A function could be described implicitly, for example as the inverse of another function or as a solution of a differential equation. In the example above, f(x) = x^2, the input 3 and its output form the ordered pair (3, 9). More commonly the word "range" is used to mean, specifically, the set of outputs. The image of this function is the set of non-negative real numbers. In analogy with arithmetic, it is possible to define addition, subtraction, multiplication, and division of functions, in those cases where the output is a number.
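The squaring function from the example can be written out directly, showing that two distinct inputs may share a single output:

```python
def f(x):
    """Relate each real number x to its square."""
    return x * x

f(3)   # 9
f(-3)  # 9: a different input giving the same output
```

Each input yields exactly one output, which is what makes f a function; the converse need not hold, as the two inputs 3 and −3 above show.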
Function (mathematics)
–
A function f takes an input x, and returns a single output f(x). One metaphor describes the function as a "machine" or "black box" that for each input returns a corresponding output.
91.
Joint distribution
–
Consider the flip of two fair coins; let A and B be the random variables associated with the outcomes of the first and second coin flips respectively. If a coin displays "heads" then the associated random variable takes the value 1, and 0 otherwise. The joint probability mass function of A and B defines probabilities for each pair of outcomes. All possible outcomes are (A = 0, B = 0), (A = 0, B = 1), (A = 1, B = 0), and (A = 1, B = 1). Since each outcome is equally likely, the joint probability mass function becomes P(A, B) = 1/4 when A, B ∈ {0, 1}. Since the flips are independent, the joint probability mass function is the product of the marginals: P(A, B) = P(A) P(B). In general, each flip follows a Bernoulli distribution. Consider the roll of a fair die, and let A = 1 if the number is even and A = 0 otherwise. Furthermore, let B = 1 if the number is prime and B = 0 otherwise. These probabilities necessarily sum to 1, since the probability of some combination of A and B occurring is 1. Again, since these are probability distributions, one has ∫_x ∫_y f_{X,Y}(x, y) dy dx = 1. Formally, f_{X,Y} is the probability density function of (X, Y) with respect to the product measure on the respective supports of X and Y. Two discrete random variables X and Y are independent if the joint probability mass function satisfies P(X = x, Y = y) = P(X = x) ⋅ P(Y = y) for all x and y. Similarly, two absolutely continuous random variables are independent if f_{X,Y}(x, y) = f_X(x) ⋅ f_Y(y) for all x and y. In that case the joint distribution can be efficiently represented by the lower-dimensional marginal distributions P(X) and P(Y). Conditional independence relations can be represented with a Bayesian network or copula functions.
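The two-coin example can be tabulated directly; the sketch below builds the joint pmf and checks that it factors into the product of the marginals.

```python
from itertools import product
from fractions import Fraction

# 1 = heads, 0 = tails; the four outcomes are equally likely.
joint = {(a, b): Fraction(1, 4) for a, b in product([0, 1], repeat=2)}

# Marginal distribution of each coin, obtained by summing out the other.
pA = {a: sum(p for (x, _), p in joint.items() if x == a) for a in (0, 1)}
pB = {b: sum(p for (_, y), p in joint.items() if y == b) for b in (0, 1)}

# Independence: the joint pmf equals the product of the marginals.
factors = all(joint[a, b] == pA[a] * pB[b] for a, b in joint)  # True
```

Using exact fractions avoids floating-point round-off, so the factorization check is an exact equality rather than an approximate one.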
Joint distribution
–
Many sample observations (black) are shown from a joint probability distribution. The marginal densities are shown as well.
92.
Independence (probability theory)
–
In probability theory, two events are independent, statistically independent, or stochastically independent if the occurrence of one does not affect the probability of the other. Similarly, two random variables are independent if the realization of one does not affect the distribution of the other. Two events A and B are independent if their joint probability equals the product of their probabilities: P(A ∩ B) = P(A) P(B). Thus, the occurrence of B does not affect the probability of A, and vice versa. Furthermore, the preferred definition makes clear by symmetry that when A is independent of B, B is also independent of A. This is called the multiplication rule for independent events. For more than two events, a mutually independent set of events is pairwise independent, but the converse is not necessarily true. A set of random variables is pairwise independent if and only if every pair of random variables is independent. The measure-theoretically inclined may prefer to substitute events [X ∈ A] for events A in the above definition, where A is any Borel set. That definition is exactly equivalent to the one above when the values of the random variables are real numbers. It has the advantage of working also for random variables taking values in any measurable space. Intuitively, two random variables X and Y are conditionally independent given a third variable Z if, once Z is known, the value of Y adds no further information about X. The formal definition of conditional independence is based on the idea of conditional distributions. If X and Y are conditionally independent given Z, then P(X = x | Y = y, Z = z) = P(X = x | Z = z) for any x, y, z with P(Z = z) > 0. That is, the conditional distribution for X given Y and Z is the same as that given Z alone.
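The multiplication rule can be verified on a single fair-die roll; the two events below are illustrative choices, not taken from the text.

```python
from fractions import Fraction

omega = set(range(1, 7))              # sample space of one fair-die roll
A = {n for n in omega if n % 2 == 0}  # event "the roll is even": {2, 4, 6}
B = {n for n in omega if n <= 4}      # event "the roll is at most 4"

def prob(event):
    """Probability of an event under the uniform distribution on omega."""
    return Fraction(len(event), len(omega))

independent = prob(A & B) == prob(A) * prob(B)  # True: 1/3 == (1/2)*(2/3)
```

Here A ∩ B = {2, 4}, so P(A ∩ B) = 1/3, which equals P(A) P(B) = (1/2)(2/3); the two events are independent even though they overlap.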
Independence (probability theory)
–
Pairwise independent, but not mutually independent, events.
93.
Dice
–
Dice are small throwable objects with multiple resting positions, used for generating random numbers. Dice are also used in non-gambling tabletop games. A traditional die is a cube, with each of its six faces showing a different number of dots from 1 to 6. When rolled, the die comes to rest showing on its upper surface a random integer from one to six, each value being equally likely. They may be used to produce results other than one through six. Crooked dice are designed to favor some results over others for purposes of cheating or amusement. Dice likely originated in the ancient Middle East. One of the oldest known dice games was excavated from a Mesopotamian tomb, dating to the 24th century BC. British archaeologist Leonard Woolley discovered the dice in the Royal Cemetery with a board game known as the Royal Game of Ur. Stick dice and tetrahedral dice were found with the board game. Unlike modern dice, the numbers on the opposite sides of Mesopotamian dice were consecutive numbers rather than numbers that add up to seven. The Egyptian game of Senet was played with dice. Senet was played before 3000 BC and up to the 2nd century AD. There is no scholarly consensus on the rules of Senet. Dicing is mentioned as an Indian game in the Rigveda and the early Buddhist games list.
Dice
–
Four differently colored traditional dice showing all six different sides
Dice
–
The Royal Game of Ur, a Mesopotamian board game played with dice
Dice
–
Bone die found at Cantonment Clinch (1823–1834), an American fort used in the American Civil War by both Confederate and Union forces at separate times
Dice
–
A collection of historical dice from various regions of Asia
94.
Conditional probability
–
In probability theory, conditional probability is a measure of the probability of an event given that another event has occurred. For example, the probability that any given person has a cough on any given day may be only 5%. But if we assume that the person has a cold, then they are much more likely to be coughing. The conditional probability of coughing given that you have a cold might be a much higher 75%. The concept of conditional probability is one of the most fundamental and one of the most important concepts in probability theory. But conditional probabilities can require careful interpretation. For example, there need not be a causal or temporal relationship between A and B. P(A|B) may or may not be equal to P(A). Also, in general, P(A|B) is not equal to P(B|A). For example, if you have cancer you might have a 90% chance of testing positive for cancer. Alternatively, if you test positive you may have only a 10% chance of actually having cancer, because cancer is very rare. Falsely equating the two probabilities causes various errors of reasoning such as the base rate fallacy. Conditional probabilities can be correctly reversed using Bayes' theorem. If P(B) > 0, the conditional probability of A given B is defined as P(A|B) = P(A ∩ B) / P(B). The logic behind this equation is that if the outcomes are restricted to B, this set serves as the new sample space. Note that this is a definition and not a theoretical result.
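The cancer-test example above can be worked numerically with Bayes' theorem. The 90% sensitivity comes from the text; the 1% base rate and 8% false-positive rate below are assumed values chosen so the reversed probability lands near the text's 10%.

```python
p_cancer = 0.01             # assumed base rate of cancer in the population
p_pos_given_cancer = 0.90   # P(test positive | cancer), from the text
p_pos_given_healthy = 0.08  # assumed false-positive rate

# Total probability of a positive test, over both sick and healthy people:
p_pos = (p_pos_given_cancer * p_cancer
         + p_pos_given_healthy * (1 - p_cancer))

# Bayes' theorem reverses the conditioning:
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos  # about 0.10
```

Even with a highly sensitive test, the rarity of the disease keeps P(cancer | positive) near 10%; ignoring that base rate is exactly the base rate fallacy.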
Conditional probability
95.
Continuous random variable
–
In more technical terms, the probability distribution is a description of a random phenomenon in terms of the probabilities of events. Examples of random phenomena can include the results of an experiment or survey. A probability distribution is defined in terms of an underlying sample space, the set of all possible outcomes of the random phenomenon being observed. Probability distributions are generally divided into two classes. A discrete distribution can be encoded by a discrete list of the probabilities of the outcomes, known as a probability mass function. On the other hand, a continuous probability distribution is typically described by probability density functions. The normal distribution represents a commonly encountered continuous probability distribution. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution. The multivariate normal distribution is a commonly encountered multivariate distribution. To define probability distributions for the simplest cases, one needs to distinguish between discrete and continuous random variables. Continuous probability distributions can be described in several ways. The cumulative distribution function is the antiderivative of the probability density function, provided that the latter exists. As probability theory is used in quite diverse applications, terminology is not uniform and sometimes confusing. The following terms are used for non-cumulative probability distribution functions: probability mass, probability mass function, p.m.f.: for discrete random variables.
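A concrete discrete distribution of the kind described above is the pmf of the sum of two fair dice, the example worked in the accompanying figure caption:

```python
from itertools import product
from fractions import Fraction

# Build the pmf of S = sum of two fair dice: each of the 36 ordered
# outcomes has probability 1/36, and several outcomes share a sum.
pmf = {}
for a, b in product(range(1, 7), repeat=2):
    s = a + b
    pmf[s] = pmf.get(s, Fraction(0)) + Fraction(1, 36)

pmf[11]                                  # 1/18: only (5,6) and (6,5)
sum(p for s, p in pmf.items() if s > 9)  # P(S > 9) = 1/12 + 1/18 + 1/36 = 1/6
```

The probabilities across all sums total exactly 1, as any probability mass function must.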
Continuous random variable
–
The probability mass function (pmf) p(S) specifies the probability distribution for the sum S of counts from two dice. For example, the figure shows that p(11) = 1/18. The pmf allows the computation of probabilities of events such as P(S > 9) = 1/12 + 1/18 + 1/36 = 1/6, and all other probabilities in the distribution.
96.
Inverse probability
–
In probability theory, inverse probability is an obsolete term for the probability distribution of an unobserved variable. The development of the terminology from "inverse probability" to "Bayesian probability" is described by Fienberg. Later, Jeffreys used the term in his defense of the methods of Bayes and Laplace, in 1939. The term "Bayesian", which displaced "inverse probability", was introduced by Ronald Fisher around 1950. Following the development of frequentism, the terms frequentist and Bayesian developed to contrast these approaches, and became common in the 1950s. The distribution of the observable data given the unobserved variable is called the direct probability. The inverse probability problem was the problem of estimating a parameter from experimental data in the experimental sciences, especially astronomy and biology. A simple example would be the problem of estimating the position of a star in the sky for purposes of navigation. Given the data, one must estimate the true position. This problem would now be considered one of inferential statistics. See also: Bayesian probability, Bayes' theorem.
Inverse probability
–
Ronald Fisher
97.
Randomness
–
Randomness is the lack of pattern or predictability in events. A random sequence of steps has no order and does not follow an intelligible pattern or combination. Individual random events are by definition unpredictable, but in many cases the frequency of different outcomes over a large number of events is predictable. For example, when throwing two dice, a sum of 7 will occur twice as often as a sum of 4. In this view, randomness is a measure of uncertainty of an outcome, and applies to concepts of chance, probability, and entropy. The fields of mathematics, probability, and statistics use formal definitions of randomness. In statistics, a random variable is an assignment of a numerical value to each possible outcome of an event space. This association facilitates the identification and the calculation of probabilities of the events. Random variables can appear in random sequences. A random process is a sequence of random variables whose outcomes do not follow a deterministic pattern, but follow an evolution described by probability distributions. These and other constructs are extremely useful in the various applications of randomness. Randomness is most often used in statistics to signify well-defined statistical properties. Monte Carlo methods, which rely on random input, are important techniques in science, as, for instance, in computational science. By analogy, quasi-Monte Carlo methods use quasirandom number generators. With a bowl containing 10 red marbles and 90 blue marbles, a random selection mechanism would choose a red marble with probability 1/10.
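The marble example doubles as a tiny Monte Carlo method of the kind mentioned above: estimate the probability by repeated random draws and compare with the exact value 1/10.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
bowl = ['red'] * 10 + ['blue'] * 90
draws = 100_000
reds = sum(random.choice(bowl) == 'red' for _ in range(draws))
estimate = reds / draws  # close to the exact probability 0.1
```

The individual draws are unpredictable, but the long-run frequency settles near 1/10, which is precisely the predictable-frequency behavior the text describes.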
Randomness
–
Ancient fresco of dice players in Pompeii.
Randomness
–
A pseudorandomly generated bitmap.
Randomness
–
The ball in a roulette can be used as a source of apparent randomness, because its behavior is very sensitive to the initial conditions.
98.
Newtonian mechanics
–
In physics, classical mechanics is one of the two major sub-fields of mechanics, along with quantum mechanics. Classical mechanics is concerned with the set of physical laws describing the motion of bodies under the influence of a system of forces. The study of the motion of bodies is an ancient one, making classical mechanics one of the oldest and largest subjects in science, engineering and technology. It is also widely known as Newtonian mechanics. Classical mechanics describes the motion of macroscopic objects, from projectiles to parts of machinery, as well as astronomical objects, such as spacecraft, planets, stars, and galaxies. Within classical mechanics are fields of study that describe the behavior of solids, liquids and gases and other specific sub-topics. When classical mechanics cannot apply, such as at the quantum level with high speeds, quantum field theory becomes applicable. Since these aspects of physics were developed long before the emergence of quantum physics and relativity, some sources exclude Einstein's theory of relativity from this category. However, a number of modern sources do include relativistic mechanics, which in their view represents classical mechanics in its most accurate form. Later, more general methods were developed, leading to reformulations of classical mechanics known as Lagrangian mechanics and Hamiltonian mechanics. They extend substantially beyond Newton's work, particularly through their use of analytical mechanics. The following introduces the basic concepts of classical mechanics. For simplicity, it often models real-world objects as point particles. The motion of a particle is characterized by a small number of parameters: its position, mass, and the forces applied to it. Each of these parameters is discussed in turn.
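The position/mass/force description of a point particle can be sketched as a minimal integrator of Newton's second law, F = ma, using simple Euler steps; the particle and force values below are illustrative.

```python
def simulate(pos, vel, mass, force, dt, steps):
    """Advance a point particle under a constant force with Euler steps."""
    x, y = pos
    vx, vy = vel
    fx, fy = force
    for _ in range(steps):
        ax, ay = fx / mass, fy / mass  # Newton's second law: a = F / m
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return (x, y), (vx, vy)

# A 1 kg projectile launched at (10, 10) m/s under gravity alone,
# force = (0, -9.8) N, integrated for 1 second in 0.01 s steps:
final_pos, final_vel = simulate((0, 0), (10, 10), 1.0, (0, -9.8), 0.01, 100)
```

After one second the horizontal velocity is unchanged (no horizontal force) while the vertical velocity has dropped by g·t = 9.8 m/s, as the second law predicts.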
Newtonian mechanics
–
Sir Isaac Newton (1643–1727), an influential figure in the history of physics and whose three laws of motion form the basis of classical mechanics
Newtonian mechanics
–
Diagram of orbital motion of a satellite around the earth, showing perpendicular velocity and acceleration (force) vectors.
Newtonian mechanics
–
Hamilton's greatest contribution is perhaps the reformulation of Newtonian mechanics, now called Hamiltonian mechanics.
99.
Chaos theory
–
Small differences in initial conditions yield widely diverging outcomes for dynamical systems, rendering long-term prediction of their behavior impossible in general. This happens even though these systems are deterministic, meaning that their future behavior is fully determined with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The approximate present does not approximately determine the future. Chaotic behavior exists in natural systems, such as weather and climate. It also occurs spontaneously in some systems such as road traffic. This behavior can be studied through analytical techniques such as recurrence plots and Poincaré maps. Chaos theory has applications including meteorology, sociology, physics, environmental science, computer science, engineering, economics, biology, ecology, and philosophy. Chaos theory concerns deterministic systems whose behavior can in principle be predicted. Chaotic systems are predictable for a while and then 'appear' to become random. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days; the solar system, 50 million years. In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast. This means, in practice, a meaningful prediction cannot be made over more than three times the Lyapunov time.
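Sensitive dependence on initial conditions can be demonstrated with the logistic map x → r·x·(1 − x) at r = 4, a standard chaotic example (not one named in the text):

```python
def max_divergence(x0, y0, r=4.0, steps=50):
    """Track two logistic-map orbits and return their largest separation."""
    x, y = x0, y0
    gap = abs(x - y)
    for _ in range(steps):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        gap = max(gap, abs(x - y))
    return gap

# A perturbation of one part in a hundred million grows to order one:
gap = max_divergence(0.2, 0.20000001)
```

The initial gap of 10^-8 roughly doubles each iteration, so within a few dozen steps the two orbits bear no resemblance to each other, which is exactly the exponential loss of forecast accuracy described above.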
Chaos theory
–
The Lorenz attractor displays chaotic behavior. These two plots demonstrate sensitive dependence on initial conditions within the region of phase space occupied by the attractor.
Chaos theory
–
A plot of Lorenz attractor for values r = 28, σ = 10, b = 8/3
Chaos theory
–
Turbulence in the tip vortex from an airplane wing. Studies of the critical point beyond which a system creates turbulence were important for chaos theory, analyzed for example by the Soviet physicist Lev Landau, who developed the Landau-Hopf theory of turbulence. David Ruelle and Floris Takens later predicted, against Landau, that fluid turbulence could develop through a strange attractor, a main concept of chaos theory.
Chaos theory
–
A conus textile shell, similar in appearance to Rule 30, a cellular automaton with chaotic behaviour.
100.
Roulette
–
Roulette is a casino game named after the French word meaning little wheel. Blaise Pascal introduced a primitive form of roulette in the 17th century in his search for a perpetual motion machine. The game has long been played in Paris. An early description included the house pockets: "There are exactly two slots reserved for the bank, whence it derives its sole mathematical advantage." It then goes on to describe the layout with, "...two betting spaces containing the bank's two numbers, zero and double zero". The book was published in 1801. The roulette wheels used in the late 1790s had red for the single zero and black for the double zero. To avoid confusion, the color green was selected for the zeros starting in the 1800s. The Eagle slot, a symbol of American liberty, was a house slot that brought the casino extra edge. Soon, the tradition vanished and since then the wheel features only numbered slots. Existing wheels with Eagle symbols are exceedingly rare, with fewer than a half-dozen copies known to exist. Authentic Eagled wheels in excellent condition can fetch tens of thousands of dollars at auction. In the 19th century, roulette spread all over the US, becoming one of the most popular casino games. A legend says that François Blanc supposedly bargained with the devil to obtain the secrets of roulette. In the United States, the double-zero wheel made its way up the Mississippi from New Orleans, then westward.
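The bank's "sole mathematical advantage" from its two reserved slots can be quantified: the sketch below computes the expected value of a one-unit even-money bet on a double-zero wheel.

```python
from fractions import Fraction

pockets = 38                       # numbers 1-36 plus the bank's 0 and 00
p_win = Fraction(18, pockets)      # an even-money bet covers 18 numbers
p_lose = Fraction(pockets - 18, pockets)

ev = p_win * 1 + p_lose * (-1)     # -1/19, about a 5.26% house edge
```

Without the two zero pockets the bet would be exactly fair (18 wins against 18 losses); the reserved slots alone turn the expectation negative for the player.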
Roulette
–
Roulette ball
Roulette
–
French roulette
Roulette
–
"Gwendolen at the roulette table" – 1910 illustration to George Eliot's "Daniel Deronda".
Roulette
–
18th century E.O. wheel with gamblers
101.
Kinetic theory of gases
–
Kinetic theory explains macroscopic properties of gases, such as pressure, temperature, viscosity, thermal conductivity, and volume, by considering their molecular composition and motion. The theory posits that pressure is due to the impacts, on the walls of the container, of molecules moving at different velocities. Kinetic theory defines temperature in its own way, not identical with the thermodynamic definition. The random motion of small grains suspended in a fluid, known as Brownian motion, results directly from collisions between the grains and the liquid's molecules. The theory for ideal gases makes the following assumptions: The gas consists of very small particles known as molecules. This is equivalent to stating that the average distance separating the gas particles is large compared to their size. These particles have the same mass. The number of molecules is so large that statistical treatment can be applied. These molecules are in constant, random, rapid motion. The rapidly moving particles constantly collide with each other and with the walls of the container. All these collisions are perfectly elastic, which means the molecules are considered to be perfectly spherical in shape and elastic in nature. Except during collisions, the interactions among molecules are negligible. This implies: 1. Relativistic effects are negligible.
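The kinetic definition of temperature can be sketched with two standard relations for an ideal monatomic gas: the average translational kinetic energy per molecule, ⟨E⟩ = (3/2) k_B T, and the root-mean-square speed that follows from it (the helium figures below are illustrative values, not taken from the text).

```python
import math

# Two standard kinetic-theory relations for an ideal monatomic gas:
#   <E_k> = (3/2) * k_B * T         (average kinetic energy per molecule)
#   (1/2) m v_rms^2 = (3/2) k_B T   (root-mean-square speed)
K_B = 1.380649e-23   # Boltzmann constant, J/K

def mean_kinetic_energy(temperature_k: float) -> float:
    """Average translational kinetic energy per molecule, in joules."""
    return 1.5 * K_B * temperature_k

def rms_speed(temperature_k: float, molecule_mass_kg: float) -> float:
    """Root-mean-square molecular speed, in m/s."""
    return math.sqrt(3.0 * K_B * temperature_k / molecule_mass_kg)

# A helium atom (mass ~6.646e-27 kg) at room temperature (300 K)
# moves at roughly 1.4 km/s:
print(rms_speed(300.0, 6.646e-27))
```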
Kinetic theory of gases
–
Hydrodynamica front cover
Kinetic theory of gases
–
The temperature of an ideal monatomic gas is proportional to the average kinetic energy of its atoms. The size of helium atoms relative to their spacing is shown to scale under 1950 atmospheres of pressure. The atoms have a certain average speed, slowed down here two trillion-fold from that at room temperature.
102.
Avogadro constant
–
Thus, it is the proportionality factor that relates the molar mass of a compound to the mass of a sample. The Avogadro constant, often designated N_A or L, has the value 6.022140857×10²³ mol⁻¹ in the International System of Units. This number is also known as the Loschmidt constant in German literature. For instance, to a first approximation, 1 gram of hydrogen, having the atomic number 1, has 6.022×10²³ hydrogen atoms. Similarly, 12 grams of ¹²C, with the mass number 12, has the same number of carbon atoms, 6.022×10²³. Avogadro's number is a dimensionless quantity and has the same numerical value as the Avogadro constant given in base units. In contrast, the Avogadro constant has the dimension of reciprocal amount of substance. Revisions in the base set of SI units necessitated redefinitions of the concepts of chemical quantity. Avogadro's number, and its definition, was deprecated in favor of the Avogadro constant and its definition. The French physicist Jean Perrin in 1909 proposed naming the constant in honor of Avogadro. Perrin won the 1926 Nobel Prize in Physics, largely for his work in determining the Avogadro constant by several different methods. Accurate determinations of Avogadro's number require the measurement of a single quantity on both the atomic and macroscopic scales using the same unit of measurement. This became possible for the first time when American physicist Robert Millikan measured the charge on an electron in 1910. By dividing the charge on a mole of electrons by the charge on a single electron, the value of Avogadro's number is obtained. Since 1910, newer calculations have more accurately determined the values for the Faraday constant and the elementary charge.
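The role of the constant as a proportionality factor can be sketched directly: divide a sample's mass by its molar mass to get moles, then multiply by the Avogadro constant to count atoms, as in the 12-gram carbon-12 example above (the function name is illustrative).

```python
# Counting atoms in a macroscopic sample via the Avogadro constant:
#   N = (m / M) * N_A
N_A = 6.022140857e23   # Avogadro constant, mol^-1 (2014 CODATA value)

def atoms_in_sample(mass_g: float, molar_mass_g_per_mol: float) -> float:
    """Moles in the sample (m / M) times atoms per mole (N_A)."""
    return (mass_g / molar_mass_g_per_mol) * N_A

# 12 grams of carbon-12 (molar mass 12 g/mol):
print(atoms_in_sample(12.0, 12.0))   # 6.022140857e+23
```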
Avogadro constant
–
Amedeo Avogadro
Avogadro constant
–
Achim Leistner at the Australian Centre for Precision Optics (ACPO) holding a one-kilogram single-crystal silicon sphere for the International Avogadro Coordination.
103.
Quantum mechanics
–
Quantum mechanics, including quantum field theory, is a fundamental branch of physics concerned with processes involving, for example, atoms and photons. Systems such as these, which obey quantum mechanics, can be in a quantum superposition of different states, unlike in classical physics. Early quantum theory was profoundly reconceived in the mid-1920s. The reconceived theory is formulated in various specially developed mathematical formalisms. In one of them, the wave function provides information about the probability amplitude of position, momentum, and other physical properties of a particle. In 1803, Thomas Young performed the famous double-slit experiment; this experiment played a major role in the general acceptance of the wave theory of light. In 1838, Michael Faraday discovered cathode rays. In 1896, Wilhelm Wien empirically determined a law of black-body radiation, known as Wien's law in his honor. Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, the law underestimated the radiance at low frequencies. Planck's hypothesis that energy is absorbed in discrete "quanta" precisely matched the observed patterns of black-body radiation. Following Max Planck's solution to the black-body radiation problem, Albert Einstein offered a quantum-based theory to explain the photoelectric effect. In 1913, Peter Debye extended Niels Bohr's theory of atomic structure, introducing elliptical orbits, a concept also introduced by Arnold Sommerfeld. This phase is known as the old quantum theory.
Quantum mechanics
–
Max Planck is considered the father of the quantum theory.
Quantum mechanics
–
Solution to Schrödinger's equation for the hydrogen atom at different energy levels. The brighter areas represent a higher probability of finding an electron
Quantum mechanics
–
The 1927 Solvay Conference in Brussels.
104.
Wave function
–
A wave function in quantum mechanics is a description of the quantum state of a system. The probabilities for the possible results of measurements made on the system can be derived from it. The most common symbols for a wave function are the Greek letters ψ or Ψ. The wave function is a function of the degrees of freedom corresponding to some maximal set of commuting observables. Once such a representation is chosen, the wave function can be derived from the quantum state. The wave function for particles with spin includes spin as an intrinsic, discrete degree of freedom. Other discrete variables can also be included, such as isospin. These values are often displayed in a column matrix. The Schrödinger equation determines how wave functions evolve over time, and a wave function behaves qualitatively like other waves; this gives rise to wave–particle duality. Since the wave function is complex valued, only its relative phase and relative magnitude can be measured. The equations represent wave–particle duality for both massless and massive particles. In the 1920s and 1930s, quantum mechanics was developed using calculus and linear algebra. Those who used the techniques of calculus included Louis de Broglie and others, developing "wave mechanics". Those who applied the methods of linear algebra included Werner Heisenberg and others, developing "matrix mechanics".
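The statement that only the relative magnitude of the wave function is measurable goes with the usual normalization convention: the squared magnitude is a probability density that integrates to one. For a single particle in one dimension, a standard textbook form of this condition (not quoted from the text above) is:

```latex
% Born rule: |\Psi|^2 is a probability density over position,
% so its integral over all space equals 1.
\int_{-\infty}^{\infty} \left|\Psi(x,t)\right|^{2}\,dx = 1
```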
Wave function
–
The electron probability density for the first few hydrogen atom electron orbitals shown as cross-sections. These orbitals form an orthonormal basis for the wave function of the electron. Different orbitals are depicted with different scale.
105.
Albert Einstein
–
Albert Einstein was a German-born theoretical physicist. Einstein developed the general theory of relativity, one of the two pillars of modern physics. Einstein's work is also known for its influence on the philosophy of science. Einstein is best known in popular culture for his mass–energy equivalence formula E = mc². Early in his career he found that the laws of classical mechanics could not be reconciled with those of the electromagnetic field; this led him to develop his special theory of relativity. Einstein continued to deal with problems of statistical mechanics and quantum theory, which led to his explanations of particle theory and the motion of molecules. He also investigated the thermal properties of light, which laid the foundation of the photon theory of light. In 1917, he applied the general theory of relativity to model the large-scale structure of the universe. Einstein settled in the U.S., becoming an American citizen in 1940. On the eve of World War II, he endorsed a letter to President Franklin D. Roosevelt alerting him to the potential development of "extremely powerful bombs of a new type"; this eventually led to what would become the Manhattan Project. Einstein supported defending the Allied forces but largely denounced the idea of using the newly discovered nuclear fission as a weapon. Later, with the British philosopher Bertrand Russell, he signed the Russell–Einstein Manifesto, which highlighted the danger of nuclear weapons. He was affiliated with the Institute for Advanced Study in Princeton, New Jersey, until his death in 1955. He published more than 300 scientific papers along with over 150 non-scientific works. In December 2014, universities and archives announced the release of Einstein's papers, comprising more than 30,000 unique documents.
Albert Einstein
–
Albert Einstein in 1921
Albert Einstein
–
Einstein at the age of 3 in 1882
Albert Einstein
–
Albert Einstein in 1893 (age 14)
Albert Einstein
–
Einstein's matriculation certificate at the age of 17, showing his final grades from the Argovian cantonal school (Aargauische Kantonsschule, on a scale of 1–6, with 6 being the highest possible mark)
106.
Max Born
–
Max Born was a German physicist and mathematician who was instrumental in the development of quantum mechanics. He also supervised the work of a number of notable physicists in the 1920s and 1930s. Born won the 1954 Nobel Prize in Physics for his "fundamental research in quantum mechanics, especially in the statistical interpretation of the wave function". He wrote his Ph.D. thesis on the subject of "Stability of Elastica in a Plane and Space", winning the university's Philosophy Faculty Prize. He subsequently wrote his habilitation thesis on the Thomson model of the atom. In 1921, Born returned to Göttingen, arranging another chair for his colleague James Franck. Under Born, Göttingen became one of the world's foremost centres for physics. In 1925, Born and Werner Heisenberg formulated the matrix mechanics representation of quantum mechanics. His influence extended far beyond his own research. In January 1933, Born, who was Jewish, was suspended from his professorship. Max Born became a British subject on 31 August 1939, one day before World War II broke out in Europe. He remained at Edinburgh until 1952. He died in a hospital in Göttingen on 5 January 1970. His mother died when Max was four years old, on 29 August 1886. Max had a half-brother, Wolfgang, from his father's second marriage, to Bertha Lipstein.
Max Born
–
Max Born (1882–1970)
Max Born
–
Solvay Conference, 1927. Born is second from the right in the second row, between Louis de Broglie and Niels Bohr.
Max Born
–
Born's gravestone in Göttingen is inscribed with the uncertainty principle, which he put on a rigorous mathematical footing.
107.
Statistics
–
Statistics is the study of the collection, analysis, interpretation, presentation, and organization of data. Populations can be diverse topics such as "all people living in a country" or "every atom composing a crystal". Statistics deals with all aspects of data, including the planning of data collection in terms of the design of surveys and experiments. Statistician Sir Arthur Lyon Bowley defines statistics as "Numerical statements of facts in any department of inquiry placed in relation to each other". When census data cannot be collected, statisticians collect data by developing specific experiment designs and survey samples. Representative sampling assures that conclusions can safely extend from the sample to the population as a whole. In contrast, an observational study does not involve experimental manipulation. Inferences in mathematical statistics are made under the framework of probability theory, which deals with the analysis of random phenomena. Working from a null hypothesis, two basic forms of error are recognized: Type I errors and Type II errors. Multiple problems have come to be associated with this framework, ranging from obtaining a sufficient sample size to specifying an adequate null hypothesis. Measurement processes that generate statistical data are also subject to error. Other types of errors can also be important. Specific techniques have been developed to address these problems. Statistics continues to be an area of active research, for example on the problem of how to analyze Big data. Statistics is a mathematical body of science that pertains to the collection, analysis, interpretation, and presentation of data, or as a branch of mathematics.
Statistics
–
Scatter plots are used in descriptive statistics to show the observed relationships between different variables.
Statistics
–
More probability density is found as one gets closer to the expected (mean) value in a normal distribution. Statistics used in standardized testing assessment are shown. The scales include standard deviations, cumulative percentages, percentile equivalents, Z-scores, T-scores, standard nines, and percentages in standard nines.
Statistics
–
Gerolamo Cardano, the earliest pioneer of the mathematics of probability.
Statistics
–
Karl Pearson, a founder of mathematical statistics.
108.
Reality
–
Reality is the state of things as they actually exist, rather than as they may appear or might be imagined. Reality includes everything, whether or not it is comprehensible. A still broader definition includes everything that has existed, exists, or will exist. Reality is often contrasted with what is imaginary, delusional, only in the mind, dreams, what is false, what is fictional, or what is abstract. At the same time, what is abstract plays a role both in everyday life and in academic research. For instance, causality, virtue, life, and distributive justice are abstract concepts that can be difficult to define, but they are only rarely equated with pure delusions. This disagreement is the basis of the philosophical problem of universals. The truth refers to what is real, while falsity refers to what is not. Fictions are considered not real. A colloquial usage would have reality mean "perceptions, attitudes toward reality," as in "My reality is not your reality." For example, in a religious discussion between friends, one might say, "You might disagree, but in my reality, everyone goes to heaven." It is what a world view ultimately attempts to describe or map. Certain ideas from physics, philosophy, and other fields shape various theories of reality. One such belief is that there literally is no reality beyond the beliefs we each have about reality. Many of the concepts of science and philosophy are often defined culturally and socially.
Reality
–
Reality-Virtuality Continuum.
109.
Quantum decoherence
–
Quantum decoherence is the loss of quantum coherence. In quantum mechanics, particles such as electrons are described by a wavefunction. These waves can interfere, leading to the peculiar behaviour of quantum particles. As long as there exists a definite phase relation between different states, the system is said to be coherent. This coherence is necessary for the function of quantum computers. However, when a system is not perfectly isolated but in contact with its surroundings, the coherence decays with time, a process called quantum decoherence. As a result of this process, the characteristically quantum behaviour is lost. Decoherence has been a subject of active research since the 1980s. Viewed in isolation, the system's dynamics are non-unitary. Thus the dynamics of the system alone are irreversible. As with any coupling, entanglements are generated between the system and the environment. These have the effect of sharing quantum information with -- or transferring it to -- the surroundings. Decoherence does not generate actual wave function collapse. It only provides an explanation for the observation of wave function collapse, as the quantum nature of the system "leaks" into the environment. That is, components of the wavefunction acquire phases from their immediate surroundings.
110.
Class membership probabilities
–
Probabilistic classifiers provide classification that can be useful in its own right or when combining classifiers into ensembles. Binary probabilistic classifiers are also called binomial regression models in statistics. In econometrics, probabilistic classification in general is called discrete choice. Some classification models, such as naive Bayes and multilayer perceptrons, are naturally probabilistic; for other models, methods exist to turn them into probabilistic classifiers. Some models, such as logistic regression, are conditionally trained: they optimize the conditional probability Pr(Y | X) directly on a training set. Even some models that are naturally probabilistic, notably naive Bayes classifiers, decision trees and boosting methods, produce distorted class probability distributions. For classification models that produce some kind of "score" on their outputs, there are several methods that turn these scores into properly calibrated class membership probabilities. For the binary case, a common approach is to apply Platt scaling, which learns a logistic regression model on the scores. An alternative method using isotonic regression is generally superior to Platt's method when sufficient training data is available. Commonly used loss functions for probabilistic classification include log loss and the mean squared error between the predicted and the true probability distributions. The former of these is commonly used to train logistic models.
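Platt scaling, mentioned above for the binary case, fits a logistic model to a classifier's raw scores. A minimal sketch follows: the sigmoid P(y=1 | s) = 1 / (1 + exp(A·s + B)) is fitted by batch gradient descent on the log loss. The toy scores, learning rate, and step count are illustrative assumptions, not part of the source text.

```python
import math

# Platt scaling sketch: learn A, B so that 1/(1+exp(A*s + B)) turns raw
# classifier scores s into calibrated probabilities of the positive class.
def platt_scale(scores, labels, lr=0.01, steps=5000):
    a, b = 0.0, 0.0
    n = len(scores)
    for _ in range(steps):
        grad_a = grad_b = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(a * s + b))   # current probability
            grad_a += (p - y) * -s                  # d(log loss)/dA
            grad_b += (p - y) * -1.0                # d(log loss)/dB
        a -= lr * grad_a / n
        b -= lr * grad_b / n
    return lambda s: 1.0 / (1.0 + math.exp(a * s + b))

# Toy example: positives tend to receive higher raw scores than negatives.
scores = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
labels = [0, 0, 0, 1, 1, 1]
calibrate = platt_scale(scores, labels)
print(calibrate(2.0))    # high probability for the positive class
print(calibrate(-2.0))   # low probability for the positive class
```

In practice Platt's method adds regularizing target values to avoid overfitting on small score sets; this sketch omits that refinement.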
111.
Heuristics in judgment and decision-making
–
In psychology, heuristics are simple, efficient rules which people often use to form judgments and make decisions. They are mental shortcuts that usually involve focusing on one aspect of a complex problem and ignoring others. They can lead to systematic deviations from logic, probability, or rational choice theory. The resulting errors are called "cognitive biases", and many different types have been documented. These have been shown to affect people's choices in situations like making an investment decision. Heuristics usually govern automatic, intuitive judgments but can also be used as deliberate mental strategies when working from limited information. Cognitive scientist Herbert A. Simon originally proposed that human judgments are based on heuristics, taking the concept from the field of computation. In the early 1970s, psychologists Amos Tversky and Daniel Kahneman demonstrated three heuristics that underlie a wide range of intuitive judgments. This research provided a theory of information processing to explain how people make choices. This heuristics-and-biases tradition has been criticised by Gerd Gigerenzer and others for being too focused on how heuristics lead to errors. The critics argue that heuristics can be seen as rational in an underlying sense. According to this perspective, heuristics are good enough for most purposes without being too demanding on the brain's resources. In their initial research, Tversky and Kahneman proposed three heuristics—availability, representativeness, and anchoring and adjustment. Subsequent work has identified many more. Heuristics that underlie judgment are called "judgment heuristics".
Heuristics in judgment and decision-making
–
The amount of money people will pay in an auction for a bottle of wine can be influenced by considering an arbitrary two-digit number.
Heuristics in judgment and decision-making
–
A visual example of attribute substitution. This illusion works because the 2D size of parts of the scene is judged on the basis of 3D (perspective) size, which is rapidly calculated by the visual system.
112.
Probability density function
–
The probability density function is nonnegative everywhere, and its integral over the entire space is equal to one. The terms "probability distribution function" and "probability function" have also sometimes been used to denote the probability density function. However, this use is not standard among probabilists and statisticians. Further confusion of terminology exists because density function has also been used for what is here called the "probability mass function". In general though, the PMF is used in the context of discrete random variables, while the PDF is used in the context of continuous random variables. Suppose a species of bacteria typically lives 4 to 6 hours. What is the probability that a bacterium lives exactly 5 hours? The answer is 0%. A lot of bacteria live for approximately 5 hours, but there is no chance that any given bacterium dies at exactly 5.0000000000... hours. Instead we might ask: What is the probability that the bacterium dies between 5 hours and 5.01 hours? Let's say the answer is 0.02. Next: What is the probability that the bacterium dies between 5 hours and 5.001 hours? The answer is probably around 0.002, since this is 1/10th of the previous interval. The probability that the bacterium dies between 5 hours and 5.0001 hours is probably about 0.0002, and so on. In these three examples, the ratio (probability of dying during an interval) / (duration of the interval) is approximately constant, and equal to 2 per hour.
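The bacteria example above can be sketched directly: with a probability density of 2 per hour near the 5-hour mark, the probability of dying within a short interval of length dt is approximately the density times dt, so the ratio probability/duration stays constant.

```python
# Bacteria-lifetime sketch: near t = 5 hours the density is f = 2 per hour,
# so P(die in [5, 5 + dt]) ~ f * dt for small dt, and the ratio p/dt is
# constant at 2 per hour, matching the 0.02 / 0.002 / 0.0002 sequence.
def death_probability(density_per_hour: float, dt_hours: float) -> float:
    """Approximate probability of dying in a short interval of length dt."""
    return density_per_hour * dt_hours

for dt in (0.01, 0.001, 0.0001):
    p = death_probability(2.0, dt)
    print(dt, p, p / dt)   # the last column is always 2.0
```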
Probability density function
–
Boxplot and probability density function of a normal distribution N(0, σ²).
113.
Webster's Dictionary
–
Noah Webster, the author of the readers and spelling books that dominated the American market at the time, spent decades of research in compiling his dictionaries. His first dictionary, A Compendious Dictionary of the English Language, appeared in 1806. Webster was a proponent of English spelling reform for reasons both philological and nationalistic. In A Companion to the American Revolution, John Algeo notes: "it is often assumed that characteristically American spellings were invented by Noah Webster. He was very influential in popularizing certain spellings in America, but he did not originate them. Rather he chose already existing options on such grounds as simplicity, etymology". For example, spellings such as color already existed as options. He spent the next two decades working to expand his dictionary. There were 2,500 copies printed, at $20 for the two volumes. At first the set sold poorly. When he lowered the price to $15, its sales improved, and by 1836 that edition was exhausted. Not all copies were bound at the same time; the book also appeared in publisher's boards, and other original bindings of a later date are not unknown. In 1841, the 82-year-old Noah Webster published a second edition of his lexicographical masterpiece with the help of his son, William G. Webster. B. L. Hamlen of New Haven, Connecticut, prepared the 1841 printing of the second edition. However, a $15 price tag on the book made it too expensive to sell easily, so the Amherst firm decided to sell out.
Webster's Dictionary
–
Noah Webster
Webster's Dictionary
–
Extract from the Orthography section of the first edition, which popularized the American standard spellings of -er (6); -or (7); dropped -e (8); -or (10); -se (11); doubling consonants with suffix (15)
Webster's Dictionary
–
President Theodore Roosevelt was criticized for supporting the simplified spelling campaign of Andrew Carnegie in 1906
Webster's Dictionary
–
Merriam-Webster’s eleventh edition of the Collegiate Dictionary
114.
Sample space
–
In probability theory, the sample space of an experiment or random trial is the set of all possible outcomes or results of that experiment. The possible outcomes are listed as elements in the set. It is common to refer to a sample space by the labels S, Ω, or U. For example, if the experiment is tossing a coin, the sample space is typically the set {heads, tails}. For tossing two coins, the corresponding sample space would be {(heads, heads), (heads, tails), (tails, heads), (tails, tails)}. For tossing a six-sided die, the typical sample space is {1, 2, 3, 4, 5, 6}. For many experiments, there may be more than one plausible sample space available, depending on what result is of interest to the experimenter. Still other sample spaces are possible, such as if some cards have been flipped when shuffling. Some treatments of probability assume that the various outcomes of an experiment are always defined so as to be equally likely. The result of this is that every possible combination of individuals who could be chosen for the sample is also equally likely. In an elementary approach to probability, any subset of the sample space is usually called an event. However, this gives rise to problems when the sample space is continuous, so that a more precise definition of an event is necessary. Under this definition only measurable subsets of the sample space, constituting a σ-algebra over the sample space itself, are considered events. See also: Probability space, Set, Event, σ-algebra.
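The coin and die examples above can be enumerated explicitly; with equally likely outcomes, the probability of an event is just the event's size divided by the size of the sample space (the variable names are illustrative).

```python
from itertools import product

# Enumerating the sample spaces from the text.
coin = {"heads", "tails"}
two_coins = set(product(coin, repeat=2))   # ordered pairs: 4 outcomes
die = {1, 2, 3, 4, 5, 6}

print(len(coin), len(two_coins), len(die))   # 2 4 6

# With equally likely outcomes, P(event) = |event| / |sample space|.
# Example event: rolling an even number.
even = {2, 4, 6}
print(len(even) / len(die))   # 0.5
```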
Sample space
–
Flipping a coin leads to a sample space composed of two outcomes that are almost equally likely.
Sample space
–
Up or down? Flipping a brass tack leads to a sample space composed of two outcomes that are not equally likely.
115.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each variation of a book. For example, an e-book, a paperback and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007, and 10 digits long if assigned before 2007. The method of assigning an ISBN varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN configuration of recognition was generated based upon the 9-digit Standard Book Numbering (SBN) created in 1966; the 10-digit ISBN format was published in 1970 as international standard ISO 2108. A similar identifier, the International Standard Serial Number, identifies periodical publications such as magazines, and the International Standard Music Number covers musical scores. The SBN configuration of recognition was generated in the United Kingdom; the United Kingdom continued to use the 9-digit SBN code until 1974. The ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit "0". For example, the SBN 340-01381-8 can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format compatible with "Bookland" European Article Number EAN-13s.
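The SBN-to-ISBN conversion above works because of how the ISBN-10 check digit is defined: the first nine digits are weighted 10 down to 2, and the check digit brings the weighted total to a multiple of 11, so a prefixed "0" contributes nothing to the sum. A sketch:

```python
# ISBN-10 check digit: weight the first nine digits 10, 9, ..., 2; the
# check digit makes the total divisible by 11 ("X" stands for 10).
# Prefixing "0" to an SBN adds 0 to the sum, so the check digit survives.
def isbn10_check_digit(first9: str) -> str:
    """Check digit for a 10-digit ISBN, given its first 9 digits."""
    total = sum((10 - i) * int(d) for i, d in enumerate(first9))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

def sbn_to_isbn10(sbn_with_check: str) -> str:
    """Convert a 9-digit SBN (including its check digit) to an ISBN-10."""
    return "0" + sbn_with_check

print(isbn10_check_digit("034001381"))   # 8, as in ISBN 0-340-01381-8
print(sbn_to_isbn10("340013818"))        # 0340013818
```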
International Standard Book Number
–
A 13-digit ISBN, 978-3-16-148410-0, as represented by an EAN-13 bar code
116.
Journal of the American Statistical Association
–
The Journal of the American Statistical Association is published four times a year. In 2010 its impact factor was the tenth highest in the "Statistics and Probability" category of Journal Citation Reports. The predecessor of this journal started with the name Publications of the American Statistical Association. It became JASA in 1922.
Journal of the American Statistical Association
–
Journal of the American Statistical Association
117.
ArXiv
–
In many fields of physics, almost all scientific papers are self-archived on the arXiv repository. Begun in August 1991, arXiv.org passed the half-million-article milestone on October 3, 2008, and hit a million by the end of 2014. By 2014 the submission rate had grown to more than 8,000 per month. The arXiv was made possible by the low-bandwidth TeX format, which allowed scientific papers to be easily transmitted over the Internet and rendered client-side. The number of papers being sent soon filled mailboxes to capacity. Additional modes of access were soon added, including the World Wide Web in 1993. The term e-print was quickly adopted to describe the articles. Its original hostname was xxx.lanl.gov. It is now hosted principally by Cornell University, with 8 mirrors around the world. Its existence was one of the precipitating factors that led to the current movement in scientific publishing known as open access. Scientists regularly upload their papers to arXiv.org for worldwide access and sometimes for reviews before they are published in peer-reviewed journals. Paul Ginsparg was awarded a MacArthur Fellowship in 2002 for his establishment of arXiv. Annual donations were envisaged to vary in size between $2,300 and $4,000, based on each institution's usage. In September 2011, Cornell University Library took overall financial responsibility for arXiv's operation and development. Ginsparg was quoted in the Chronicle of Higher Education as saying it "was supposed to be a three-hour tour, not a life sentence".
ArXiv
–
arXiv
ArXiv
–
A screenshot of the arXiv taken in 1994, using the browser NCSA Mosaic. At the time, HTML forms were a new technology.
118.
Cambridge University Press
–
Cambridge University Press is the publishing business of the University of Cambridge. Granted letters patent by Henry VIII in 1534, it is the world's oldest publishing house and the second-largest university press in the world. The Press also holds letters patent as the Queen's Printer. Cambridge University Press is both an academic and educational publisher. With a global sales presence and offices in more than 40 countries, the Press publishes over 50,000 titles by authors from over 100 countries. Its publishing includes academic journals, monographs, reference works, and English-language teaching and learning publications. Cambridge University Press is a charitable enterprise that transfers part of its annual surplus back to the university. It is both the oldest publishing house in the world and the oldest university press. Cambridge is one of the two privileged presses. Authors published by Cambridge have included John Milton, William Harvey, Isaac Newton, and Stephen Hawking. In 1591, John Legate printed the first Cambridge Bible, an octavo edition of the popular Geneva Bible. The London Stationers objected strenuously, claiming that they had the monopoly on Bible printing. The university's response was to point out the provision in its charter to print "all manner of books". It was in 1698 that a body of senior scholars was appointed to be responsible to the university for the Press's affairs. Its role still includes the review and approval of the Press's planned output.
Cambridge University Press
–
The University Printing House, on the main site of the Press
Cambridge University Press
–
The letters patent of Cambridge University Press by Henry VIII allow the Press to print "all manner of books". The fine initial with the king's portrait inside it and the large first line of script are still discernible.
Cambridge University Press
–
The Pitt Building in Cambridge, which used to be the headquarters of Cambridge University Press, and now serves as a conference centre for the Press.
119.
BBC
–
The British Broadcasting Corporation is a British public service broadcaster. The BBC operates under its Agreement with the Secretary of State for Culture, Media and Sport. Britain's first public broadcast, from the Marconi factory in Chelmsford, took place in June 1920. It featured the famous Australian soprano Dame Nellie Melba. The broadcast caught the people's imagination and marked a turning point in the British public's attitude to radio. However, this public enthusiasm was not shared in official circles, where such broadcasts were held to interfere with important civil communications. John Reith, a Scottish Calvinist, was appointed the company's General Manager in December 1922, a few weeks after the company made its first official broadcast. The company was to be financed by a royalty on the sale of BBC wireless receiving sets from approved manufacturers. To this day, the BBC aims to follow the Reithian directive to "inform, educate and entertain". The financial arrangements soon proved inadequate. Set sales were disappointing as amateurs made their own receivers and listeners bought rival unlicensed sets. By mid-1923, the Postmaster-General had commissioned a review of broadcasting by the Sykes Committee. This was to be followed by a simple 10-shillings licence fee with no royalty once the wireless manufacturers' protection expired. The BBC's broadcasting monopoly was made explicit for the duration of its current licence, as was the prohibition on advertising. The BBC was also required to source all news from external wire services.
BBC
–
BBC Television Centre at White City, West London, which opened in 1960 and closed in 2013
BBC
–
BBC Pacific Quay in Glasgow, which was opened in 2007
BBC
–
BBC New Broadcasting House, London, which came into use during 2012–13.
BBC
–
The headquarters of the BBC at Broadcasting House in Portland Place, London, England. This section of the building is called 'Old Broadcasting House'.
120.
Edwin Thompson Jaynes
–
Edwin Thompson Jaynes was the Wayman Crow Distinguished Professor of Physics at Washington University in St. Louis. Jaynes strongly promoted the interpretation of probability theory as an extension of logic. Together with Fred Cummings, he modeled the evolution of a two-level atom in an electromagnetic field in a fully quantized way. This model is known as the Jaynes–Cummings model. Other contributions include the mind projection fallacy. His book Probability Theory: The Logic of Science (Cambridge University Press, ISBN 0-521-59271-2) was published posthumously in 2003. An unofficial list of errata is hosted by Kevin S. Van Horn.
External links: Edwin Thompson Jaynes at the Mathematics Genealogy Project; an early version of Probability Theory: The Logic of Science (no longer downloadable for copyright reasons); a comprehensive web page on E. T. Jaynes's life and work; E. T. Jaynes's obituary at Washington University; Jaynes's analysis of Rudolph Wolf's dice data: http://bayes.wustl.edu/etj/articles/entropy.concentration.pdf
Edwin Thompson Jaynes
–
Edwin Thompson Jaynes (1922–1998), photo taken circa 1960.
121.
An Anthology of Chance Operations
–
An Anthology of Chance Operations was edited by La Monte Young and DIY co-published in 1963 in New York City. The project became the manifestation of the original impetus for establishing Fluxus. The materials that Young had collected were not published until An Anthology of Chance Operations appeared. It was a collaborative publication project between people who were to become part of Fluxus: Young, Mac Low and Maciunas. Heiner Friedrich issued a second edition in 1970.
An Anthology of Chance Operations
–
Book cover.
122.
GNU Free Documentation License
–
The GNU Free Documentation License is a copyleft license for free documentation, designed by the Free Software Foundation for the GNU Project. Copies may be sold commercially, but, if produced in larger quantities, the original document or source code must be made available to the work's recipient. The GFDL was designed for documentation, which often accompanies GNU software. However, it can be used for any text-based work, regardless of subject matter. For example, the free online encyclopedia Wikipedia uses the GFDL for all of its text. The GFDL was released in draft form for feedback in September 1999. After revisions, version 1.1 was issued in March 2000, version 1.2 in November 2002, and version 1.3 in November 2008. The current state of the license is version 1.3. Material licensed under the current version of the license can be used for any purpose, as long as the use meets certain conditions. All previous authors of the work must be attributed. All changes to the work must be logged. All derivative works must be licensed under the same license. Technical measures such as DRM may not be used to control or obstruct distribution or editing of the document. The license explicitly separates any kind of "Document" from "Secondary Sections", which may not be integrated with the Document, but exist as front-matter materials or appendices. Secondary sections can contain information regarding the author's or publisher's relationship to the subject matter, but not any subject matter itself.
GNU Free Documentation License
–
The GFDL logo
123.
Logic
–
Logic, originally meaning "the word" or "what is spoken", is generally held to consist of the systematic study of the form of arguments. A valid argument is one where there is a specific relation of logical support between the assumptions of the argument and its conclusion. Historically, logic has been studied in philosophy and mathematics; more recently it has also been studied in computer science, linguistics, psychology, and other fields. The concept of logical form is central to logic. The validity of an argument is determined by its logical form, not by its content. Aristotelian syllogistic logic and modern symbolic logic are examples of formal logic. Informal logic is the study of natural language arguments. The study of fallacies is an important branch of informal logic. Since informal argument is not strictly speaking deductive, on some conceptions of logic, informal logic is not logic at all. See "Rival conceptions" below. Formal logic is the study of inference with purely formal content. The works of Aristotle contain the earliest known formal study of logic. Modern formal logic expands on Aristotle. In many definitions of logic, logical inference and inference with purely formal content are the same. This does not render the notion of informal logic vacuous, because no formal logic captures all of the nuances of natural language.
Logic
–
Aristotle, 384–322 BCE.
124.
Outline of logic
–
One of the aims of logic is to identify the correct (or valid) and incorrect (or fallacious) inferences. Logicians study the criteria for the evaluation of arguments. By design, fallacies may exploit emotional triggers in the listener or interlocutor, or take advantage of social relationships between people. Fallacious arguments are often structured using rhetorical patterns that obscure any logical argument. Fallacies can be used to win arguments regardless of the merits. There are dozens of types of fallacies. Formal logic – Mathematical logic, symbolic logic and formal logic are largely, if not completely, synonymous. The essential feature of this field is the use of formal languages to express the ideas whose logical validity is being studied. Related topics include the axiom, deductive system, formal proof, formal system, formal theorem, syntactic consequence, syntax, and transformation rules. Model theory – The study of interpretation of formal systems. The field has grown to include the study of generalized definability. The answers to these questions have led to a rich theory, still being actively researched.
Outline of logic
–
Tautology
125.
History of logic
–
The history of logic deals with the study of the development of the science of valid inference. Formal logics developed in China, India, and Greece. Greek methods, particularly Aristotelian logic as found in the Organon, found wide application and acceptance in Western science and mathematics for millennia. The Stoics, especially Chrysippus, began the development of predicate logic. Later, empirical methods ruled the day, as evidenced by Sir Francis Bacon's Novum Organum of 1620. Valid reasoning has been employed in all periods of human history. However, logic studies the principles of valid reasoning, inference and demonstration. It is probable that the idea of demonstrating a conclusion first arose with geometry, which originally meant the same as "land measurement". The ancient Egyptians discovered geometry, including the formula for the volume of a truncated pyramid. Ancient Babylon was also skilled in mathematics. While the ancient Egyptians empirically discovered some truths of geometry, the great achievement of the ancient Greeks was to replace empirical methods by demonstrative proof. Both Thales and Pythagoras, of the Pre-Socratic philosophers, seemed aware of geometry's methods. The proof must be formal; that is, the derivation of the proposition must be independent of the subject matter in question. This is part of a protracted debate about truth and falsity.
History of logic
–
Plato's academy
History of logic
–
Aristotle's logic was still influential in the Renaissance
History of logic
–
Chrysippus of Soli
History of logic
–
A text by Avicenna, founder of Avicennian logic
126.
Logic in computer science
–
The ACM–IEEE Symposium on Logic in Computer Science (LICS) is an annual academic conference on the theory and practice of computer science in relation to mathematical logic. Since 1995, the Kleene award has been given to the best student paper. Since 2006, the LICS Test-of-Time Award has been given annually to one among the twenty-year-old LICS papers that have best met the test of time. The list of computer science conferences contains other academic conferences in computer science. LICS home page
127.
Metamathematics
–
Metamathematics is the study of mathematics itself using mathematical methods. This study produces metatheories, which are mathematical theories about other mathematical theories. Emphasis on metamathematics owes itself to David Hilbert's attempt to secure the foundations of mathematics in the early part of the 20th century. Metamathematics provides "a mathematical technique for investigating a great variety of foundation problems for logic". An important feature of metamathematics is its emphasis on differentiating between reasoning from inside a system and from outside a system. An informal illustration of this is categorizing the proposition "2 + 2 = 4" as belonging to mathematics while categorizing the proposition "'2 + 2 = 4' is valid" as belonging to metamathematics. Something similar can be said about the well-known Russell's paradox. Metamathematics was intimately connected to mathematical logic, so that the early histories of the two fields, during the late 19th and early 20th centuries, largely overlap. Serious metamathematical reflection began with the work of Gottlob Frege, especially his Begriffsschrift. David Hilbert was the first to invoke the term "metamathematics" with regularity. In his hands, it meant something akin to contemporary proof theory, in which finitary methods are used to study various mathematical theorems. Today, metalogic and metamathematics are largely synonymous with each other, and both have been substantially subsumed by mathematical logic in academia. The discovery of hyperbolic geometry had important philosophical consequences for metamathematics. Before its discovery there was just one geometry and mathematics; the idea of another geometry was considered improbable. The "uproar of the Boeotians" gave an impetus to metamathematics and to great improvements in mathematical rigour, analytical philosophy and logic.
Metamathematics
–
The title page of the shortened version of the Principia Mathematica to *56, an important work of metamathematics.
128.
Set theory
–
Set theory is a branch of mathematical logic that studies sets, which informally are collections of objects. Although any type of object can be collected into a set, set theory is applied most often to objects that are relevant to mathematics. The language of set theory can be used in the definitions of nearly all mathematical objects. The modern study of set theory was initiated in the 1870s. Set theory is commonly employed as a foundational system for mathematics, particularly in the form of Zermelo–Fraenkel set theory with the axiom of choice. Beyond its foundational role, set theory is a branch of mathematics in its own right, with an active research community. Mathematical topics typically evolve among many researchers. Set theory, however, was founded by a single paper in 1874 by Georg Cantor: "On a Property of the Collection of All Real Algebraic Numbers". Especially notable among precursors is the work of Bernard Bolzano in the first half of the 19th century. The modern understanding of infinity began with set theory. An 1872 meeting between Cantor and Richard Dedekind influenced Cantor's thinking and culminated in Cantor's 1874 paper. Cantor's work initially polarized the mathematicians of his day. While Karl Weierstrass and Dedekind supported Cantor, Leopold Kronecker, now seen as a founder of mathematical constructivism, did not. The utility of set theory led to the article "Mengenlehre", contributed in 1898 to Klein's encyclopedia. In 1899 Cantor himself posed the question "What is the cardinal number of the set of all sets?", and obtained a related paradox.
Set theory
–
Georg Cantor
Set theory
–
A Venn diagram illustrating the intersection of two sets.
129.
A priori and a posteriori
–
The terms a priori and a posteriori are used with respect to reasoning to distinguish "necessary conclusions from first premises" from "conclusions based on sense observation". A posteriori knowledge or justification is dependent on experience or empirical evidence, as with most aspects of science and personal knowledge. There are many points of view on these two types of knowledge, and their relationship is one of the oldest problems in modern philosophy. The terms a priori and a posteriori are primarily used as adjectives to modify the noun "knowledge". However, "a priori" is sometimes used to modify other nouns, such as "truth". Philosophers also may use "apriority" and "aprioricity" as nouns to refer to the quality of being "a priori". Although definitions and use of the terms have varied in the history of philosophy, they have consistently labeled two separate epistemological notions. See also the related distinctions: deductive/inductive, analytic/synthetic, necessary/contingent. The intuitive distinction between a priori and a posteriori knowledge is best seen in examples. A priori: Consider the proposition, "If George V reigned at least four days, then he reigned more than three days." This is something that one knows a priori, because it expresses a statement that one can derive by reason alone. A posteriori: Compare this with the proposition expressed by the sentence, "George V reigned from 1910 to 1936." This is something that one must come to know a posteriori, because it expresses an empirical fact unknowable by reason alone. One theory, popular among the logical positivists of the early 20th century, is what Boghossian calls the "analytic explanation of the a priori". The distinction between analytic and synthetic propositions was first introduced by Kant.
A priori and a posteriori
–
Time Portal
130.
Definition
–
A definition is a statement of the meaning of a term. Definitions can be classified into two large categories: intensional definitions and extensional definitions. Another important category of definitions is the class of ostensive definitions, which convey the meaning of a term by pointing out examples. A term may have multiple meanings, and thus require multiple definitions. In mathematics, a definition is used to give a precise meaning to a new term, instead of describing a pre-existing term. Definitions and axioms are the basis on which all of mathematics is constructed. In modern usage, a definition is something, typically expressed in words, that attaches a meaning to a word or group of words. Note that the definiens is not the meaning of the word defined; it is instead something that conveys the same meaning as that word. There are many sub-types of definitions, often specific to a given field of study. An intensional definition, also called a connotative definition, specifies the necessary and sufficient conditions for a thing being a member of a specific set. Any definition that attempts to set out the essence of something, such as that by genus and differentia, is an intensional definition. An extensional definition, also called a denotative definition, of a term specifies its extension. It is a list naming every object that is a member of a specific set. An extensional definition of the seven deadly sins would be the list: wrath, greed, sloth, pride, lust, envy, gluttony. A genus–differentia definition is a type of intensional definition that takes a large category (the genus) and narrows it down to a smaller category by a distinguishing characteristic (the differentia).
Definition
–
A definition states the meaning of a word using other words. This is sometimes challenging. Common dictionaries contain lexical, descriptive definitions, but there are various types of definition - all with different purposes and focuses.
131.
Logical consequence
–
A valid logical argument is one in which the conclusion is entailed by the premises, because the conclusion is a consequence of the premises. Much of philosophical logic is meant to provide accounts of the nature of logical consequence and the nature of logical truth. Logical consequence is necessary and formal, and is explained by way of examples using formal proof and models of interpretation. The most widely prevailing view on how best to account for logical consequence is to appeal to formality. Syntactic accounts of logical consequence rely on schemes using inference rules. For instance, we can express the logical form of a valid argument as: All A are B. All C are A. Therefore, all C are B. This argument is formally valid, because every instance of an argument constructed using this scheme is valid. This is in contrast to an argument like "Fred is Mike's brother's son. Therefore Fred is Mike's nephew." If you know that Q follows logically from P, no information about the possible interpretations of P or Q will affect that knowledge. Our knowledge that Q is a logical consequence of P cannot be influenced by empirical knowledge. Deductively valid arguments can be known to be so without recourse to experience, so they must be knowable a priori. However, formality alone does not guarantee that logical consequence is not influenced by empirical knowledge.
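The formal validity of that scheme can be checked mechanically. The following Python sketch (illustrative only, not from the article) enumerates every interpretation of A, B and C as subsets of a small universe and confirms that no instance of "All A are B; all C are A; therefore all C are B" has true premises and a false conclusion:

```python
from itertools import combinations

def powerset(universe):
    """All subsets of the universe, as frozensets."""
    return [frozenset(c) for r in range(len(universe) + 1)
            for c in combinations(universe, r)]

subs = powerset((0, 1, 2))

# "All X are Y" holds when X is a subset of Y.  A counterexample to the
# scheme would be an interpretation with true premises and a false conclusion.
counterexamples = [
    (A, B, C)
    for A in subs for B in subs for C in subs
    if A <= B and C <= A and not C <= B
]
print(len(counterexamples))  # 0: the form admits no counterexample
```

No counterexample exists because the subset relation is transitive, which is exactly what makes the scheme formally valid regardless of what A, B and C stand for.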
Logical consequence
–
Tautology
132.
Logical truth
–
Logical truth is one of the most fundamental concepts in logic, and there are different theories on its nature. A logical truth is a statement which is true, and remains true under all reinterpretations of its components other than its logical constants. It is a type of analytic statement. All of philosophical logic can be thought of as providing accounts of the nature of logical truth, as well as logical consequence. Logical truths are truths which are considered to be necessarily true. However, it is not universally agreed that there are any statements which are necessarily true. A logical truth is considered by some philosophers to be a statement which is true in all possible worlds. Later, with the rise of formal logic, a logical truth was considered to be a statement which is true under all possible interpretations. Logical truths, being analytic statements, do not contain any information about any matters of fact, and empiricists commonly hold that they thus do not purport to describe the world. Other than logical truths, there is also a second class of analytic statements, typified by "No bachelor is married." The characteristic of such a statement is that it can be turned into a logical truth by substituting synonyms for synonyms salva veritate: "No bachelor is married" can be turned into "No unmarried man is married" by substituting "unmarried man" for its synonym "bachelor".
133.
Name
–
A name is a term used for identification. Names can identify a category of things, or a single thing, either uniquely, or within a given context. A personal name identifies, not necessarily uniquely, a specific individual human. The name of a specific entity is, when consisting of only one word, a proper noun. Other nouns are sometimes called "general names". Caution must be exercised, for there are ways that one language may prefer one type of name over another. Also, claims to authority can be refuted: the British did not refer to Louis-Napoleon as Napoleon III during his rule. The word "name" is perhaps connected to non-Indo-European terms such as Proto-Uralic *nime. In the Old Testament, a change of name indicates a change of status. Simon was renamed Peter when he was given the Keys to Heaven. Throughout the Bible, characters are given names at birth that describe the course of their lives. For example: Solomon meant peace, and the king with that name was the first whose reign was without war. Likewise, Joseph named his firstborn Manasseh, saying, "God has made me forget all my troubles and everyone in my father's family." Historically, Jewish people did not have surnames which were passed from generation to generation. Instead, they were typically known as the child of their father.
Name
–
A cartouche indicates that the Egyptian hieroglyphs enclosed are a royal name.
134.
Necessity and sufficiency
–
In logic, necessity and sufficiency are implicational relationships between statements. The assertion that one statement is a necessary and sufficient condition of another means that the two statements must be either simultaneously true or simultaneously false. In ordinary English, "necessary" and "sufficient" indicate relations between states of affairs, not statements. Being a male sibling is a sufficient condition for being a brother; Fred's being a male sibling is sufficient for the truth of the statement that Fred is a brother. In the above situation, we also say that N is a necessary condition for S. Phrased differently, the antecedent S cannot be true without N being true. For example, in order for someone to be called Socrates, it is necessary for that someone to be named "Socrates". We also say that S is a sufficient condition for N. Consider the truth table again: if the conditional statement is true, then if S is true, N must be true. In common terms, "S guarantees N". Continuing the example, knowing that someone is called Socrates is sufficient to know that that someone has a name. A necessary and sufficient condition requires that both of the implications S ⇒ N and N ⇒ S hold. This is expressed as "S is necessary and sufficient for N", "S if and only if N", or S ⇔ N.
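The truth table mentioned above can be written out explicitly. This Python sketch (illustrative only, not from the article) tabulates material implication and shows that "S is sufficient for N" rules out exactly one case: S true while N is false.

```python
from itertools import product

def implies(p, q):
    """Material implication: false only when p is true and q is false."""
    return (not p) or q

# "S is sufficient for N" and "N is necessary for S" both name S => N;
# "necessary and sufficient" additionally requires N => S.
for s, n in product([False, True], repeat=2):
    print(s, n, implies(s, n), implies(s, n) and implies(n, s))

# The only row where S => N fails is S true with N false:
failing = [(s, n) for s, n in product([False, True], repeat=2)
           if not implies(s, n)]
print(failing)  # [(True, False)]
```

The biconditional S ⇔ N (the last column) is true exactly on the rows where S and N agree, matching "simultaneously true or simultaneously false".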
Necessity and sufficiency
–
The sun being above the horizon is a necessary condition for direct sunlight; but it is not a sufficient condition as something else may be casting a shadow, e.g. in the case of an eclipse.
Necessity and sufficiency
–
That a train runs on schedule can be a sufficient condition for arriving on time (if one boards the train and it departs on time, then one will arrive on time); but it is not always a necessary condition, since there are other ways to travel (if the train does not run to time, one could still arrive on time through other means of transport).
135.
Paradox
–
A paradox is a statement that, despite apparently sound reasoning from true premises, leads to a self-contradictory or a logically unacceptable conclusion. Some logical paradoxes are known to be invalid arguments but are still valuable in promoting critical thinking. Some paradoxes have revealed errors in definitions assumed to be rigorous, and have caused axioms of mathematics and logic to be re-examined. Others, such as Curry's paradox, are not yet resolved. Examples outside logic include the Ship of Theseus from philosophy. Paradoxes can also take the form of other media; for example, M. C. Escher achieved paradox in many of his drawings. Common themes in paradoxes include self-reference, infinite regress, circular definitions, and confusion between different levels of abstraction. Patrick Hughes outlines three laws of the paradox. Self-reference: an example is "This statement is false", a form of the liar paradox; the statement is referring to itself. Another example of self-reference is the question of whether the barber shaves himself in the Barber paradox. One more example would be "Is the answer to this question 'No'?" Contradiction: "This statement is false"; the statement cannot be both false and true at the same time. Another example of contradiction is if a man talking to a genie wishes that wishes couldn't come true. Vicious infinite regress: "This statement is false"; if the statement is true, then the statement is false, thereby making the statement true.
Paradox
136.
Reason
–
Reason is the capacity for consciously making sense of things, applying logic, and changing or justifying practices, institutions, and beliefs based on new or existing information. An aspect of it is sometimes referred to as rationality. Reasoning is associated with thinking, cognition, and intellect. Reason, like intuition, is one of the ways by which thinking comes from one idea to a related idea. It is also closely identified with the ability to self-consciously change beliefs, attitudes, traditions, and institutions, and therefore with the capacity for freedom and self-determination. In contrast to reason as an abstract noun, a reason is a consideration which justifies some event, phenomenon, or behavior. The field of logic studies ways in which human beings reason formally through argument. The field of automated reasoning studies how reasoning may or may not be modeled computationally. Animal psychology considers the question of whether animals other than humans can reason. As a philosophical term, logos was translated in its non-linguistic senses as ratio. This was also commonly a translation for logos in the sense of an account of money, and is the direct source of the English word "reason". Thomas Hobbes, for example, also used the word ratiocination as a synonym for "reasoning". Reason is often said to be "self-correcting", and the critique of reason has been a persistent theme in philosophy. It has been defined in different ways by different thinkers about human nature. Perhaps starting with Pythagoras or Heraclitus, the cosmos is even said to have reason.
Reason
–
Francisco de Goya, The Sleep of Reason Produces Monsters (El sueño de la razón produce monstruos), c. 1797
Reason
–
René Descartes
Reason
–
Dan Sperber believes that reasoning in groups is more effective and promotes their evolutionary fitness.
137.
Reference
–
Reference is a relation between objects in which one object designates, or acts as a means by which to connect to or link to, another object. The first object in this relation is said to refer to the second object. The second object, the one to which the first object refers, is called the referent of the first object. In some cases, methods are used that intentionally hide the reference from some observers, as in cryptography. The term adopts shades of meaning particular to the contexts in which it is used. Some of them are described in the sections below. A number of words derive from the same root, including refer, referee, referent, and referendum. Its derivatives may carry the sense of "link to" or "connect to", as in the meanings of reference described in this article. Another sense is "consult"; this is reflected in such expressions as reference work, job reference, etc. In semantics, reference is generally construed as the relationships between nouns or pronouns and objects that are named by them. Hence, the word "John" refers to the person John. The word "it" refers to some previously specified object. The object referred to is called the referent of the word. Sometimes the word-object relation is called "denotation"; the word denotes the object. The converse relation, from object to word, is called "exemplification"; the object exemplifies what the word denotes.
Reference
–
The triangle of reference, from the influential book The Meaning of Meaning (1923) by C. K. Ogden and I. A. Richards.
138.
Syntax (logic)
–
In logic, syntax is anything having to do with formal languages or formal systems without regard to any interpretation or meaning given to them. Syntax is usually associated with the rules governing the composition of texts in a formal language that constitute the well-formed formulas of a formal system. In computer science, syntax refers to the rules governing the composition of well-formed expressions in a programming language. As in mathematical logic, it is independent of semantics and interpretation. A symbol is an idea, abstraction or concept, tokens of which may be marks or a configuration of marks which form a particular pattern. Symbols of a formal language need not be symbols of anything. For instance, there are logical constants which do not refer to any idea, but rather serve as a form of punctuation in the language. A symbol or string of symbols may comprise a well-formed formula if the formulation is consistent with the formation rules of the language. Symbols of a formal language must be capable of being specified without any reference to any interpretation of them. A formal language is a syntactic entity which consists of a set of finite strings of symbols which are its words. Which strings of symbols are words is usually determined by specifying a set of formation rules. Formation rules are a precise description of which strings of symbols are the well-formed formulas of a formal language. The language is synonymous with the set of strings over its alphabet which constitute well-formed formulas. However, the formation rules do not describe the semantics of those formulas. A proposition is a sentence expressing something true or false.
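Formation rules can be made concrete with a toy example. The following Python sketch (an illustrative assumption, not part of the article) defines a minimal propositional language and decides well-formedness purely by the shape of an expression, with no reference to any interpretation:

```python
# A toy formal language: the atoms 'p', 'q', 'r' are wffs, and if X and Y
# are wffs, so are ('not', X) and ('and', X, Y).  These formation rules
# classify expressions by shape alone, never by meaning.
ATOMS = {"p", "q", "r"}

def is_wff(expr):
    """Return True if expr is a well-formed formula of the toy language."""
    if expr in ATOMS:
        return True
    if isinstance(expr, tuple):
        if len(expr) == 2 and expr[0] == "not":
            return is_wff(expr[1])
        if len(expr) == 3 and expr[0] == "and":
            return is_wff(expr[1]) and is_wff(expr[2])
    return False

print(is_wff(("and", "p", ("not", "q"))))  # True: built by the rules
print(is_wff(("not",)))                    # False: malformed
```

The checker never asks what "p" or "not" mean; like the formation rules of any formal language, it only tests whether a string of symbols was built according to the rules.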
Syntax (logic)
–
This diagram shows the syntactic entities which may be constructed from formal languages. The symbols and strings of symbols may be broadly divided into nonsense and well-formed formulas. A formal language is identical to the set of its well-formed formulas. The set of well-formed formulas may be broadly divided into theorems and non-theorems.
139.
List of paradoxes
–
This is a list of paradoxes, grouped thematically. The grouping is approximate, as paradoxes may fit into more than one category. Because of varying definitions of the term paradox, some of the following are not considered to be paradoxes by everyone. The entries on this list have their own articles. Although considered paradoxes, some of these are based on fallacious reasoning or flawed analysis. Informally, the term is often used to describe a counter-intuitive result. Barbershop paradox: The supposition that if one of two simultaneous assumptions leads to a contradiction, the other assumption is also disproved leads to paradoxical consequences. Not to be confused with the Barber paradox. Catch-22: A situation in which someone is in need of something that can only be had by not being in need of it. Drinker paradox: In any pub there is a customer of whom it is true to say: if that customer drinks, everybody in the pub drinks. Paradox of entailment: Inconsistent premises always make an argument valid. Raven paradox: Observing a green apple increases the likelihood of all ravens being black. Ross's paradox: Disjunction introduction poses a problem for imperative inference by seemingly permitting arbitrary imperatives to be inferred. Unexpected hanging paradox: The day of the hanging will be a surprise, so it cannot happen at all, so it will be a surprise. The Bottle Imp paradox uses similar logic.
List of paradoxes
–
Abilene
List of paradoxes
–
The Monty Hall problem: which door do you choose?
140.
Integrated Authority File
–
The Integrated Authority File, or GND, is an international authority file for the organisation of personal names, subject headings and corporate bodies from catalogues. It is used mainly for documentation in libraries and increasingly also by archives and museums. The GND is managed by the German National Library in cooperation with various regional library networks in German-speaking Europe and other partners. The GND falls under the Creative Commons Zero (CC0) licence. The GND specification provides a hierarchy of high-level entities and sub-classes, useful in library classification, and an approach to unambiguous identification of single elements. It also comprises an ontology intended for knowledge representation in the semantic web, available in the RDF format.
Integrated Authority File
–
GND screenshot
141.
Probability
–
Probability is the measure of the likelihood that an event will occur. Probability is quantified as a number between 0 and 1. The higher the probability of an event, the more certain we are that the event will occur. A simple example is the tossing of a fair coin. Since the coin is unbiased, the two outcomes are both equally probable; the probability of "head" equals the probability of "tail". Since no other outcomes are possible, the probability of either "head" or "tail" is 1/2. This type of probability is also called a priori probability. Probability theory is also used to describe the underlying mechanics and regularities of complex systems. For example, tossing a fair coin twice will yield the outcomes "head-head", "head-tail", "tail-head", and "tail-tail". The probability of getting an outcome of "head-head" is 1 out of 4, or 0.25. The frequentist interpretation considers probability to be the relative frequency "in the long run" of outcomes. Subjectivists assign numbers per subjective probability, i.e. as a degree of belief. The most popular version of subjective probability is Bayesian probability, which includes expert knowledge as well as experimental data to produce probabilities. The knowledge is represented by some prior distribution. These data are incorporated in a likelihood function.
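The head-head calculation above can be checked by simulation. A minimal Python sketch (illustrative only, not from the article) estimates the probability empirically and recovers a value close to the exact 1/4:

```python
import random

random.seed(0)  # fix the seed so runs are reproducible

# Toss a fair coin twice, many times over, counting the "head-head" outcomes.
trials = 100_000

def head():
    """One fair coin toss: True for head with probability 1/2."""
    return random.random() < 0.5

hh = sum(1 for _ in range(trials) if head() and head())
estimate = hh / trials
print(estimate)  # close to the exact value 1/4 = 0.25
```

This is the frequentist idea in miniature: the relative frequency of "head-head" over many repetitions converges on the a priori probability of 0.25.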
Probability
–
Christiaan Huygens probably published the first book on probability
Probability
–
Gerolamo Cardano
Probability
–
Carl Friedrich Gauss