1.
Probability theory
–
Probability theory is the branch of mathematics concerned with probability, the analysis of random phenomena. Although it is not possible to predict precisely the results of random events, certain patterns emerge in the aggregate; two representative mathematical results describing such patterns are the law of large numbers and the central limit theorem. As the mathematical foundation for statistics, probability theory is essential to human activities that involve quantitative analysis of large sets of data. Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state; a great discovery of twentieth-century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics. Christiaan Huygens published a book on the subject in 1657. Initially, probability theory mainly considered discrete events, and its methods were mainly combinatorial. Eventually, analytical considerations compelled the incorporation of continuous variables into the theory. This culminated in modern probability theory, on foundations laid by Andrey Nikolaevich Kolmogorov, who combined the notion of sample space, introduced by Richard von Mises, with measure theory. This became the mostly undisputed axiomatic basis for modern probability theory. Most introductions to probability theory treat discrete probability distributions and continuous probability distributions separately; the more mathematically advanced, measure theory-based treatment of probability covers the discrete, the continuous, and any mix of the two. Consider an experiment that can produce a number of outcomes. The set of all outcomes is called the sample space of the experiment. The power set of the sample space is formed by considering all different collections of possible results. For example, rolling an honest die produces one of six possible results; one collection of possible results corresponds to getting an odd number. Thus, the subset {1, 3, 5} is an element of the power set of the sample space of die rolls. 
In this case, {1, 3, 5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, that event is said to have occurred. Probability is a way of assigning every event a value between zero and one, with the requirement that the event made up of all possible results be assigned a value of one. For a fair die, the probability that any one of the events {1, 6}, {3}, or {2, 4} will occur is 5/6. This is the same as saying that the probability of the event {1, 2, 3, 4, 6} is 5/6; this event encompasses the possibility of any number except five being rolled. The mutually exclusive event {5} has a probability of 1/6, and the event {1, 2, 3, 4, 5, 6} has a probability of 1, that is, absolute certainty. Discrete probability theory deals with events that occur in countable sample spaces. Modern definition: the modern definition starts with a finite or countable set called the sample space, which relates to the set of all possible outcomes in the classical sense, denoted by Ω
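The die-roll example above can be made concrete with a short sketch. This is an illustrative computation, not part of the original article; the event names are hypothetical:

```python
from fractions import Fraction

# Sample space of an honest six-sided die; each outcome is equally likely.
omega = {1, 2, 3, 4, 5, 6}

def prob(event):
    """Probability of an event, i.e. a subset of the sample space."""
    assert event <= omega, "an event must be a subset of the sample space"
    return Fraction(len(event), len(omega))

odd = {1, 3, 5}            # the event "an odd number is rolled"
not_five = omega - {5}     # any number except five being rolled

print(prob(odd))       # 1/2
print(prob(not_five))  # 5/6
print(prob({5}))       # 1/6
print(prob(omega))     # 1
```

The probabilities of the mutually exclusive events {5} and "not five" sum to 1, as the text requires.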
Probability theory
–
The normal distribution, a continuous probability distribution.
Probability theory
–
The Poisson distribution, a discrete probability distribution.
2.
Glossary of probability and statistics
–
The following is a glossary of terms used in the mathematical sciences of statistics and probability. 
alternative hypothesis
atomic event: another name for an elementary event.
bar chart
bias: a sample that is not representative of the population, among other senses.
causal study: a study that asks causal questions; for example, how will my headache feel if I take aspirin? Causal studies may be either experimental or observational.
conditional probability: the probability of A, assuming B; written P(A|B), and read "the probability of A, given B".
confidence interval (CI): in inferential statistics, a range of plausible values for the population mean; for example, one based on a study of sleep habits among 100 people. This is different from the sample mean, which can be measured directly.
confidence level: also known as a confidence coefficient, the confidence level indicates the probability that the confidence interval captures the true population mean. For example, an interval with a 95 percent confidence level has a 95 percent chance of capturing the population mean. Technically, this means that, if the experiment were repeated many times, 95 percent of the CIs would contain the true population mean.
continuous variable
correlation: also called the correlation coefficient, a measure of the strength of linear relationship between two random variables. An example is the Pearson product-moment correlation coefficient, which is found by dividing the covariance of the two variables by the product of their standard deviations.
expected value: the sum of the probability of each possible outcome of the experiment multiplied by its payoff; it thus represents the amount one expects to win per bet if bets with identical odds are repeated many times. For example, the expected value of a six-sided die roll is 3.5. The concept is similar to the mean.
joint probability: the probability of two events occurring together; the joint probability of A and B is written P(A ∩ B) or P(A, B).
kurtosis: a measure of the "peakedness" of the probability distribution of a real-valued random variable. For example, imagine pulling a ball with the number k from a bag of n balls. 
marginal probability: the probability of an event, not conditioned on other events; the marginal probability of A is written P(A). Contrast with conditional probability.
mean: 1. the expected value of a random variable; 2. the arithmetic mean of a series of values; think of the result of a series of coin flips.
null hypothesis: the hypothesis to be tested; for example, if one wanted to test whether light has an effect on sleep, the null hypothesis would be that light has no effect. It is often symbolized as H0
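Two of the entries above, expected value and the Pearson correlation, can be sketched numerically. This is an illustrative computation added here, not part of the glossary; the data values are made up:

```python
import math

# Expected value of a fair six-sided die: each outcome k has payoff k
# and probability 1/6, so E = (1 + 2 + ... + 6) / 6 = 3.5.
expected = sum(range(1, 7)) / 6
print(expected)  # 3.5

def pearson(xs, ys):
    """Pearson product-moment correlation: the covariance of the two
    variables divided by the product of their standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

# A perfectly linear relationship gives a correlation of 1.
print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))  # ≈ 1.0
```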
Glossary of probability and statistics
–
Statistics
3.
Notation in probability and statistics
–
Probability theory and statistics have some commonly used conventions, in addition to standard mathematical notation and mathematical symbols. Random variables are written in upper-case Roman letters: X, Y, etc. Particular realizations of a random variable are written in the corresponding lower-case letters; for example, x1, x2, …, xn could be a sample corresponding to the random variable X. P(A ∩ B) or P(A, B) indicates the probability that events A and B both occur; P(A ∪ B) indicates the probability of either event A or event B occurring. σ-algebras are usually written with upper-case calligraphic letters. Probability density functions and probability mass functions are denoted by lower-case letters, e.g. f; cumulative distribution functions are denoted by upper-case letters, e.g. F. Greek letters are commonly used to denote unknown parameters. A tilde (~) denotes "has the probability distribution of". Placing a hat, or caret, over a true parameter denotes an estimator of it; e.g., θ̂ is an estimator for θ. The arithmetic mean of a series of values x1, x2, …, xn is often denoted by placing an overbar over the symbol, e.g. x̄, pronounced "x bar". The α-level upper critical value of a probability distribution is the value exceeded with probability α. Column vectors are usually denoted by boldface lower-case letters, e.g. x. The transpose operator is denoted by either a superscript T or a prime symbol; a row vector is written as the transpose of a column vector, e.g. xT or x′. Common abbreviations include: a.e. (almost everywhere), a.s. (almost surely), cdf (cumulative distribution function), cmf (cumulative mass function), df (degrees of freedom), i.i.d. (independent and identically distributed).
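A few of these conventions map directly onto code. The sketch below is a hypothetical illustration of the sample x1, …, xn, the sample mean x̄, and the transpose of a column vector; the data are made up:

```python
# Realizations x1, ..., xn of a random variable X, written in lower case.
xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

# The arithmetic mean, conventionally written with an overbar (x-bar).
x_bar = sum(xs) / len(xs)
print(x_bar)  # 5.0

# A column vector x (3 x 1) and its transpose x' (1 x 3), as nested lists.
x = [[1], [2], [3]]
x_T = [[row[0] for row in x]]
print(x_T)  # [[1, 2, 3]]
```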
Notation in probability and statistics
–
Statistics
4.
Determinism
–
Determinism is the philosophical position that for every event there exist conditions that could cause no other event. There are many determinisms, depending on what pre-conditions are considered to be determinative of an event or action. Deterministic theories throughout the history of philosophy have sprung from diverse and sometimes overlapping motives and considerations. Some forms of determinism can be tested with ideas from physics. The opposite of determinism is some kind of indeterminism, and determinism is often contrasted with free will. Determinism is often taken to mean causal determinism, which in physics is known as cause-and-effect: the concept that events within a given paradigm are bound by causality in such a way that any state is completely determined by prior states. This meaning can be distinguished from the varieties of determinism mentioned below. Numerous historical debates involve many philosophical positions and varieties of determinism; they include debates concerning determinism and free will, technically denoted as compatibilistic and incompatibilistic. Determinism should not be confused with the self-determination of human actions by reasons and motives, and it rarely requires that perfect prediction be practically possible. Causal determinism is, however, a broad enough term to allow that one's deliberations and choices are themselves links in the causal chain. Causal determinism proposes that there is an unbroken chain of prior occurrences stretching back to the origin of the universe. The relation between events need not be specified, nor the origin of that universe; causal determinists believe that there is nothing in the universe that is uncaused or self-caused. Historical determinism can also be synonymous with causal determinism, and causal determinism has been considered more generally as the idea that everything that happens or exists is caused by antecedent conditions. Yet such conditions can also be considered metaphysical in origin. 
Nomological determinism is the most common form of causal determinism: the notion that the past and the present dictate the future entirely and necessarily, by rigid natural laws, and that every occurrence results inevitably from prior events. Quantum mechanics and various interpretations thereof pose a challenge to this view. Nomological determinism is sometimes illustrated by the thought experiment of Laplace's demon. It is sometimes called scientific determinism, although that is a misnomer; physical determinism is generally used synonymously with nomological determinism. Necessitarianism is closely related to the causal determinism described above. It is a metaphysical principle that denies all mere possibility: there is exactly one way for the world to be. Leucippus claimed there were no uncaused events, and that everything occurs for a reason and by necessity
Determinism
–
Many philosophical theories of determinism frame themselves with the idea that reality follows a sort of predetermined path
Determinism
–
Adequate determinism focuses on the fact that, even without a full understanding of microscopic physics, we can predict the distribution of 1000 coin tosses
Determinism
–
Nature and nurture interact in humans. A scientist looking at a sculpture after some time does not ask whether we are seeing the effects of the starting materials or of environmental influences.
Determinism
–
A technological determinist might suggest that technology like the mobile phone is the greatest factor shaping human civilization.
5.
Hypothesis
–
A hypothesis is a proposed explanation for a phenomenon. For a hypothesis to be a scientific hypothesis, the scientific method requires that one can test it. Scientists generally base scientific hypotheses on previous observations that cannot satisfactorily be explained with the available scientific theories. Even though the words hypothesis and theory are often used synonymously, a scientific hypothesis is not the same as a scientific theory. A working hypothesis is a provisionally accepted hypothesis proposed for further research. "P is the assumption in a What If question. Remember, the way that you prove an implication is by assuming the hypothesis." --Philip Wadler. In its ancient usage, hypothesis referred to a summary of the plot of a classical drama. The English word hypothesis comes from the ancient Greek word ὑπόθεσις (hupothesis). In Plato's Meno, Socrates dissects virtue with a method used by mathematicians, that of investigating from a hypothesis. In this sense, hypothesis refers to an idea or to a convenient mathematical approach that simplifies cumbersome calculations. In common usage in the 21st century, a hypothesis refers to a provisional idea whose merit requires evaluation. For proper evaluation, the framer of a hypothesis needs to define specifics in operational terms; a hypothesis requires more work by the researcher in order to either confirm or disprove it. In due course, a hypothesis may become part of a theory or occasionally may grow to become a theory itself. Normally, scientific hypotheses have the form of a mathematical model. In entrepreneurial science, a hypothesis is used to formulate provisional ideas within a business setting; the formulated hypothesis is then evaluated, being proven either true or false through a verifiability- or falsifiability-oriented experiment. Any useful hypothesis will enable predictions by reasoning: it might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature. 
The prediction may also invoke statistics and only talk about probabilities. Other philosophers of science have rejected the criterion of falsifiability or supplemented it with other criteria, such as verifiability or coherence. The scientific method involves experimentation, to test the ability of some hypothesis to adequately answer the question under investigation. In contrast, unfettered observation is not as likely to raise unexplained issues or open questions in science. A thought experiment might also be used to test the hypothesis. In framing a hypothesis, the investigator must not currently know the outcome of a test, or it must remain reasonably under continuing investigation. Only in such cases does the experiment, test or study potentially increase the probability of showing the truth of a hypothesis
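The idea that a hypothesis may "invoke statistics and only talk about probabilities" can be sketched with a toy test. This is a hypothetical example, not from the article: under the hypothesis that a coin is fair, we compute how surprising an observed count of heads would be.

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the probability of seeing
    at least k heads in n tosses if the fairness hypothesis holds."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 60 heads in 100 tosses is unlikely, but not impossible, under fairness;
# a small tail probability counts as evidence against the hypothesis.
p_value = binom_tail(100, 60)
print(round(p_value, 4))  # ≈ 0.0284
```

The hypothesis is never "proven" by such a test; the data merely make it more or less plausible, which is exactly the probabilistic character of prediction the passage describes.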
Hypothesis
–
Andreas Cellarius hypothesis, demonstrating the planetary motions in eccentric and epicyclical orbits
6.
Scientific theory
–
Established scientific theories have withstood rigorous scrutiny and are a comprehensive form of scientific knowledge. It is important to note that the definition of a theory as used in the disciplines of science is significantly different from the common vernacular usage of the word. These different usages are comparable to the differing, and often opposing, usages of the word prediction in science versus in vernacular speech. The strength of a theory is related to the diversity of phenomena it can explain. In certain cases, a less-accurate, unmodified scientific theory can still be treated as a theory if it is useful as an approximation under specific conditions. Scientific theories are testable and make falsifiable predictions; they describe the causal elements responsible for a particular natural phenomenon, and are used to explain and predict aspects of the physical universe or specific areas of inquiry. Scientists use theories as a foundation to further scientific knowledge. As with other forms of knowledge, scientific theories are both deductive and inductive in nature and aim for predictive power and explanatory capability. Paleontologist, evolutionary biologist, and science historian Stephen Jay Gould said: "Facts and theories are different things, not rungs in a hierarchy of increasing certainty. Theories are structures of ideas that explain and interpret facts." The defining characteristic of all scientific knowledge is the ability to make testable predictions; the relevance and specificity of those predictions determine how potentially useful the theory is. A would-be theory that makes no observable predictions is not a theory at all, and predictions not sufficiently specific to be tested are similarly not useful. In both cases, the term theory is not applicable. A body of descriptions of knowledge can be called a theory if it fulfills the following criteria. It is well-supported by many independent strands of evidence, rather than a single foundation. 
It is consistent with preexisting experimental results and at least as accurate in its predictions as any preexisting theories. These qualities are certainly true of such established theories as special and general relativity, quantum mechanics, plate tectonics, the modern evolutionary synthesis, etc. It is among the most parsimonious explanations, economical in the use of proposed entities or explanatory steps as per Occam's razor. The United States National Academy of Sciences defines scientific theories as follows: the formal scientific definition of theory refers to a comprehensive explanation of some aspect of nature that is supported by a vast body of evidence. Such fact-supported theories are not guesses but reliable accounts of the real world. The theory of biological evolution is more than just a theory; it is as factual an explanation of the universe as the atomic theory of matter or the germ theory of disease
Scientific theory
–
A central prediction from a current theory: the general theory of relativity predicts the bending of light in a gravitational field. This prediction was first tested during the solar eclipse of May 1919.
Scientific theory
–
The first observation of cells, by Robert Hooke, using an early microscope. This led to the development of cell theory.
Scientific theory
–
Precession of the perihelion of Mercury (exaggerated). The deviation in Mercury's position from the Newtonian prediction is about 43 arc-seconds (about two-thirds of 1/60 of a degree) per century.
Scientific theory
–
Planets of the Solar System, with the Sun at the center. (Sizes to scale; distances and illumination not to scale.)
7.
Uncertainty
–
Uncertainty is a situation which involves imperfect and/or unknown information. However, uncertainty is an expression without a straightforward description: it applies to predictions of future events as well as to physical measurements that have already been made. Uncertainty arises in partially observable and/or stochastic environments, as well as from ignorance and/or indolence. Uncertainty: a state of having limited knowledge, where it is impossible to exactly describe the existing state, a future outcome, or more than one possible outcome. Risk: a state of uncertainty where some possible outcomes have an undesired effect or significant loss. Measurement of risk: a set of measured uncertainties where some possible outcomes are losses, together with the magnitudes of those losses; this also includes loss functions over continuous variables. A measurable uncertainty, or risk proper, is thus in effect not an uncertainty at all: if probabilities are applied to the possible outcomes using weather forecasts or even just a calibrated probability assessment, the uncertainty has been quantified. Suppose it is quantified as a 90% chance of sunshine. If there is a major, costly, outdoor event planned for tomorrow then there is a risk, since there is a 10% chance of rain, and rain would be undesirable. Furthermore, if $100,000 would be lost if it rains, then the risk has been quantified: a 10% chance of losing $100,000. These situations can be made even more realistic by quantifying light rain vs. heavy rain and the cost of each. Some may represent the risk in this example as the expected opportunity loss (EOL), the chance of the loss multiplied by the amount of the loss. That is useful if the organizer of the event is risk neutral, which most people are not; most would be willing to pay a premium to avoid the loss. An insurance company, for example, would compute an EOL as a minimum for any insurance coverage, then add onto that other operating costs and profit. 
Since many people are willing to buy insurance for many reasons, the EOL alone is clearly not the perceived value of avoiding the risk. Quantitative uses of the terms uncertainty and risk are fairly consistent across fields such as probability theory, actuarial science, and information theory. Some also create new terms without substantially changing the definitions of uncertainty or risk; for example, surprisal is a variation on uncertainty sometimes used in information theory. But outside of the more mathematical uses of the term, usage may vary widely. In cognitive psychology, uncertainty can be real, or just a matter of perception, such as expectations, threats, etc. Vagueness or ambiguity are sometimes described as second-order uncertainty, where there is uncertainty even about the definitions of the uncertain states or outcomes; the difference here is that this uncertainty is about human definitions and concepts, not an objective fact of nature. It is usually modelled by some variation on Zadeh's fuzzy logic. It has been argued that ambiguity is always avoidable, while uncertainty is not necessarily avoidable. Uncertainty may be purely a consequence of a lack of knowledge of obtainable facts: there may be uncertainty about whether a new rocket design will work, but this uncertainty can be removed with further analysis and experimentation
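The rain example reduces to a one-line expected-opportunity-loss calculation. The figures come from the passage; the code itself is only an illustrative sketch:

```python
# Expected opportunity loss (EOL): the chance of the loss multiplied
# by the amount of the loss, using the figures from the rain example.
loss_if_rain = 100_000   # dollars lost if the outdoor event is rained out
p_rain_pct = 10          # 10% chance of rain

eol = loss_if_rain * p_rain_pct // 100
print(eol)  # 10000

# A risk-neutral organizer values avoiding the risk at exactly the EOL;
# an insurer would charge at least this, plus operating costs and profit.
```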
Uncertainty
–
We are frequently presented with situations wherein a decision must be made when we are uncertain of exactly how to proceed.
8.
Epistemology
–
Epistemology is the branch of philosophy concerned with the theory of knowledge. Epistemology studies the nature of knowledge, justification, and the rationality of belief. The term epistemology was first used by the Scottish philosopher James Frederick Ferrier in 1854. However, according to Brett Warren, King James VI of Scotland had previously personified this philosophical concept as the character Epistemon in 1591; this philosophical approach signified a philomath seeking to obtain greater knowledge through epistemology with the use of theology. The dialogue was used by King James to educate society on various concepts, including the history. The word epistemology is derived from the ancient Greek epistēmē, meaning knowledge, and the suffix -logy, meaning a logical discourse. J. F. Ferrier coined epistemology on the model of ontology, to designate that branch of philosophy which aims to discover the meaning of knowledge, and called it the true beginning of philosophy. The word is equivalent to the concept Wissenschaftslehre, which was used by the German philosopher Johann Fichte. French philosophers then gave the term épistémologie a narrower meaning as theory of knowledge; Émile Meyerson opened his Identity and Reality, written in 1908, with that term. In mathematics, it is known that 2 + 2 = 4, but there is also knowing how to add two numbers, and knowing a person, place, thing, or activity. Some philosophers think there is an important distinction between knowing that, knowing how, and acquaintance-knowledge, with epistemology being primarily concerned with the first of these. While these distinctions are not explicit in English, they are defined explicitly in other languages. In French, Portuguese and Spanish, to know (a person) is translated using connaître, conhecer and conocer respectively; modern Greek has the verbs γνωρίζω and ξέρω. Italian has the verbs conoscere and sapere, and the nouns for knowledge are conoscenza and sapienza; German has the verbs wissen and kennen. 
The verb itself implies a process: you have to go from one state to another. This verb seems to be the most appropriate for describing the episteme in one of the modern European languages; hence the German name Erkenntnistheorie. The theoretical interpretation and significance of these linguistic issues remains controversial. In his paper On Denoting and his later book Problems of Philosophy, Bertrand Russell stressed the distinction between knowledge by description and knowledge by acquaintance; Gilbert Ryle is similarly credited with stressing the distinction between knowing how and knowing that in The Concept of Mind. This position is essentially Ryle's, who argued that a failure to acknowledge the distinction between knowledge that and knowledge how leads to infinite regress. Belief includes the truth, and everything else we accept as true for ourselves from a cognitive point of view. Whether someone's belief is true is not a prerequisite for belief; on the other hand, if something is actually known, then it categorically cannot be false. If a bridge collapsed under a person who believed it was safe, it would not be accurate to say that he knew that the bridge was safe, because plainly it was not. By contrast, if the bridge actually supported his weight, then he might say that he had believed that the bridge was safe, whereas now, after proving it to himself, he knows it. Epistemologists argue over whether belief is the proper truth-bearer; some would rather describe knowledge as a system of justified true propositions. Plato, in his Gorgias, argues that belief is the most commonly invoked truth-bearer
Epistemology
–
Plato – Kant – Nietzsche
9.
Measure (mathematics)
–
In mathematical analysis, a measure on a set is a systematic way to assign a number to each suitable subset of that set, intuitively interpreted as its size. In this sense, a measure is a generalization of the concepts of length, area, and volume. For instance, the Lebesgue measure of the interval [0, 1] in the real numbers is its length in the everyday sense of the word, specifically 1. Technically, a measure is a function that assigns a non-negative real number or +∞ to (certain) subsets of a set X. It must further be countably additive: the measure of a subset that can be decomposed into a countable number of smaller disjoint subsets is the sum of the measures of the smaller subsets. In general, if one wants to associate a consistent size to each subset of a set while satisfying the other axioms of a measure, one only finds trivial examples like the counting measure. This problem was resolved by defining measure only on a sub-collection of all subsets, the so-called measurable subsets, which are required to form a σ-algebra; this means that countable unions, countable intersections and complements of measurable subsets are measurable. Non-measurable sets in a Euclidean space, on which the Lebesgue measure cannot be defined consistently, are necessarily complicated in the sense of being badly mixed up with their complement; indeed, their existence is a consequence of the axiom of choice. Measure theory was developed in successive stages during the late 19th and early 20th centuries by Émile Borel, Henri Lebesgue, Johann Radon and others. The main applications of measures are in the foundations of the Lebesgue integral and in Andrey Kolmogorov's axiomatisation of probability theory: probability theory considers measures that assign to the whole set the size 1, and considers measurable subsets to be events whose probability is given by the measure. Ergodic theory considers measures that are invariant under, or arise naturally from, a dynamical system. Let X be a set and Σ a σ-algebra over X. A function μ from Σ to the extended real number line is called a measure if it satisfies the following properties. Non-negativity: for all E in Σ, μ(E) ≥ 0. 
Countable additivity (or σ-additivity): for all countable collections {Ei}, i = 1, 2, …, of pairwise disjoint sets in Σ, μ(⋃i Ei) = Σi μ(Ei). One may require that at least one set E has finite measure. Then the empty set automatically has measure zero because of countable additivity: μ(E) = μ(E ∪ ∅ ∪ ∅ ∪ …) = μ(E) + μ(∅) + μ(∅) + …, which implies that μ(∅) = 0. If only the second and third conditions of the definition of measure above are met, and μ takes on at most one of the values ±∞, then μ is called a signed measure. The pair (X, Σ) is called a measurable space, and the members of Σ are called measurable sets. If (X, ΣX) and (Y, ΣY) are two measurable spaces, then a function f : X → Y is called measurable if for every Y-measurable set B ∈ ΣY, the preimage f⁻¹(B) is X-measurable, i.e. f⁻¹(B) ∈ ΣX. A triple (X, Σ, μ) is called a measure space. A probability measure is a measure with total measure one, i.e. μ(X) = 1, and a probability space is a measure space with a probability measure
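The axioms can be checked mechanically on a small finite space using the counting measure. This is a hypothetical illustration added here; real measure theory is concerned with infinite σ-algebras, where only the countable version of additivity matters:

```python
from itertools import combinations

X = frozenset({1, 2, 3})

def powerset(s):
    """All subsets of s: on a finite set, the full power set is a sigma-algebra."""
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def mu(A):
    """The counting measure: the size of the subset."""
    return len(A)

sigma = powerset(X)

# Non-negativity, and the measure of the empty set being zero:
assert all(mu(A) >= 0 for A in sigma)
assert mu(frozenset()) == 0

# (Finite) additivity on disjoint sets: mu(A ∪ B) = mu(A) + mu(B).
for A in sigma:
    for B in sigma:
        if not (A & B):
            assert mu(A | B) == mu(A) + mu(B)

print("counting measure satisfies the axioms on this finite space")
```

Normalizing by μ(X) = 3 would turn this counting measure into a probability measure, recovering the uniform distribution on three outcomes.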
Measure (mathematics)
–
Informally, a measure has the property of being monotone in the sense that if A is a subset of B, the measure of A is less than or equal to the measure of B. Furthermore, the measure of the empty set is required to be 0.
10.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to the exact scope and definition of mathematics. Mathematicians seek out patterns and use them to formulate new conjectures, and resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, then mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, and measurement; practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said: "The universe cannot be read until we have learned the language in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as the Queen of the Sciences. Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences. 
Applied mathematics has led to entirely new mathematical disciplines, such as statistics and game theory. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind; there is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns. In Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a systematic study of mathematics in its own right with Greek mathematics. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today; the overwhelming majority of works in this ocean contain new mathematical theorems and their proofs. The word máthēma is derived from μανθάνω (manthano), while the modern Greek equivalent is μαθαίνω (mathaino), both meaning "to learn". In Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study", even in Classical times
Mathematics
–
Euclid (holding calipers), Greek mathematician, 3rd century BC, as imagined by Raphael in this detail from The School of Athens.
Mathematics
–
Greek mathematician Pythagoras (c. 570 – c. 495 BC), commonly credited with discovering the Pythagorean theorem
Mathematics
–
Leonardo Fibonacci, the Italian mathematician who established the Hindu–Arabic numeral system to the Western World
Mathematics
–
Carl Friedrich Gauss, known as the prince of mathematicians
11.
Finance
–
Finance is a field that deals with the study of investments. It includes the dynamics of assets and liabilities over time under conditions of different degrees of uncertainty and risk. Finance can also be defined as the science of money management. Finance aims to price assets based on their risk level and their expected rate of return. Finance can be broken into three different sub-categories: public finance, corporate finance and personal finance. Personal finance includes, e.g., health and property insurance, investing, and saving for retirement; it may also involve paying for a loan or other debt obligations. Net worth is a person's balance sheet, calculated by adding up all assets under that person's control, minus all liabilities of the household, at one point in time. Household cash flow totals up all the expected sources of income within a year, minus all expected expenses within the same year. From this analysis, the financial planner can determine to what degree and in what time the personal goals can be accomplished. Adequate protection: the analysis of how to protect a household from unforeseen risks. These risks can be divided into liability, property, death, disability and health. Some of these risks may be self-insurable, while most will require the purchase of an insurance contract. Determining how much insurance to get, at the most cost-effective terms, requires knowledge of the market for personal insurance. Business owners, professionals, athletes and entertainers require specialized insurance professionals to adequately protect themselves. Since insurance also enjoys some tax benefits, utilizing insurance investment products may be a piece of the overall investment planning. Tax planning: typically the income tax is the single largest expense in a household. Managing taxes is not a question of whether you will pay taxes, but when and how much. Government gives many incentives in the form of tax deductions and credits. Most modern governments use a progressive tax: typically, as one's income grows, a higher marginal rate of tax must be paid. 
Understanding how to take advantage of tax breaks when planning one's personal finances can have a significant impact and can save money in the long term. Investment and accumulation goals involve planning how to accumulate enough money for large purchases. Major reasons to accumulate assets include purchasing a house or car, starting a business, paying for education expenses, and saving for retirement. Achieving these goals requires projecting what they will cost and when funds will need to be withdrawn. A major risk to the household in achieving its accumulation goal is the rate of price increases over time, or inflation. Using net present value calculators, the planner will suggest a combination of asset earmarking and regular savings. To overcome the rate of inflation, the investment portfolio has to earn a higher rate of return; managing these portfolio risks is most often accomplished using asset allocation, which seeks to diversify investment risk and opportunity.
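The net present value reasoning above can be illustrated with a small sketch: project a goal's future cost under inflation, then discount it back at the portfolio's assumed rate of return. The figures (a 100,000 purchase in ten years, 3% inflation, 6% return) are hypothetical:

```python
def future_cost(cost_today, inflation, years):
    # Price level grows geometrically with inflation.
    return cost_today * (1 + inflation) ** years

def present_value(amount, rate, years):
    # Discount a future amount back at the portfolio's rate of return.
    return amount / (1 + rate) ** years

# Hypothetical goal: a 100,000 purchase in 10 years with 3% inflation.
goal = future_cost(100_000, 0.03, 10)      # ≈ 134,392
# At a 6% portfolio return, the lump sum needed today:
needed = present_value(goal, 0.06, 10)     # ≈ 75,044
```

Because the assumed return (6%) exceeds inflation (3%), the sum needed today is less than the goal's price today; if the portfolio only matched inflation, the full 100,000 would be needed now.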
Finance
–
London Stock Exchange, global center of finance.
Finance
Finance
–
Wall Street, the center of American finance.
12.
Gambling
–
Gambling is the wagering of money or something of value on an event with an uncertain outcome, with the primary intent of winning money or material goods. Gambling thus requires three elements to be present: consideration, chance and prize. The term gaming in this context typically refers to instances in which the activity has been specifically permitted by law, though this distinction is not universally observed in the English-speaking world; for instance, in the United Kingdom, the regulator of gambling activities is called the Gambling Commission. Gambling is also an international commercial activity, with the legal gambling market totaling an estimated $335 billion in 2009. In other forms, gambling can be conducted with materials which have a value but are not real money. Many popular games played in modern casinos originate from Europe and China: games such as craps, baccarat, roulette and blackjack originate from different areas of Europe, while a version of keno, an ancient Chinese lottery game, is played in casinos around the world. In addition, pai gow poker, a hybrid between pai gow and poker, is also played. Many jurisdictions, local as well as national, either ban gambling or heavily control it by licensing the vendors. Such regulation generally leads to gambling tourism and illegal gambling in the areas where it is not allowed. There is generally legislation requiring that the odds in gaming devices are statistically random, to prevent manufacturers from making some high-payoff results impossible. Since these high payoffs have very low probability, a bias can quite easily be missed unless the odds are checked carefully. Most jurisdictions that allow gambling require participants to be above a certain age; in some jurisdictions, the gambling age differs depending on the type of gambling. For example, in many American states one must be over 21 to enter a casino.
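The point about rare high payoffs hiding a bias can be illustrated by simulation. A sketch, assuming a hypothetical device that pays a jackpot with probability 1/1000: with only a few thousand trials, the observed jackpot rate is too noisy to distinguish a fair device from one in which the jackpot has quietly been made impossible, which is why careful checking of the odds is needed:

```python
import random

def observed_rate(p, trials, seed):
    """Simulate `trials` independent spins of a device that pays
    a jackpot with probability p, and return the observed rate."""
    rng = random.Random(seed)
    hits = sum(rng.random() < p for _ in range(trials))
    return hits / trials

# With 1,000 spins, a fair 1-in-1000 device often shows 0 or 1
# jackpots, indistinguishable from a device that never pays:
print(observed_rate(1 / 1000, 1_000, seed=1))
# With 1,000,000 spins, the fair rate becomes visible near 0.001:
print(observed_rate(1 / 1000, 1_000_000, seed=1))
```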
Nonetheless, both insurance and gambling contracts are typically considered aleatory contracts under most legal systems, though they are subject to different types of regulation. Under common law, particularly English law, a gambling contract may not give a casino bona fide purchaser status. For case law on recovery of gambling losses where the loser had stolen the funds, see Rights of owner of money as against one who won it in gambling transaction from thief. This was a plot point in a Perry Mason novel, The Case of the Singing Skirt. Religious perspectives on gambling have been mixed. Ancient Hindu poems like the Gambler's Lament and the Mahabharata testify to the popularity of gambling among ancient Indians; however, the text Arthashastra recommends taxation and control of gambling. Ancient Jewish authorities frowned on gambling, even disqualifying professional gamblers from testifying in court. For these social and religious reasons, most legal jurisdictions limit gambling. In at least one case, the same bishop opposing a casino has sold land to be used for its construction. Different interpretations of law also exist in the Muslim world.
Gambling
–
Caravaggio, The Cardsharps, c. 1594
Gambling
–
Gamblers in the Ship of Fools, 1494
Gambling
–
Bag with 65 Inlaid Gambling Sticks, Tsimshian (Native American), 19th century, Brooklyn Museum
Gambling
–
The Caesars Palace main fountain. The statue is a copy of the ancient Winged Victory of Samothrace.
13.
Science
–
Science is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe. The formal sciences are often excluded, as they do not depend on empirical observations; disciplines which use science, like engineering and medicine, may also be considered applied sciences. During the Islamic Golden Age, foundations for the scientific method were laid by Ibn al-Haytham in his Book of Optics. In the 17th and 18th centuries, scientists increasingly sought to formulate knowledge in terms of physical laws, and over the course of the 19th century the word science became increasingly associated with the scientific method itself as a disciplined way to study the natural world. It was during this time that scientific disciplines such as biology and chemistry took their modern shapes. Science in a broad sense existed before the modern era and in many historical civilizations, but modern science is distinct in its approach and successful in its results. Science in its original sense was a word for a type of knowledge rather than a specialized word for the pursuit of such knowledge; in particular, it was the type of knowledge which people can communicate to each other. For example, knowledge about the working of natural things was gathered long before recorded history and led to the development of complex abstract thought, as shown by the construction of calendars and techniques for making poisonous plants edible. For this reason, it is claimed that the early Greek thinkers were the first philosophers in the strict sense; they were mainly speculators or theorists, particularly interested in astronomy. In contrast, trying to use knowledge of nature to imitate nature was seen as a more appropriate interest for lower-class artisans.
A clear-cut distinction between formal and empirical science was made by the pre-Socratic philosopher Parmenides; although his work Peri Physeos ("On Nature") is a poem, it may be viewed as an epistemological essay on method in natural science. Parmenides' ἐὸν may refer to a system or calculus which can describe nature more precisely than natural languages, and physis may be identical to ἐὸν. He criticized the older type of study of physics as too purely speculative and lacking in self-criticism, and was particularly concerned that some of the early physicists treated nature as if it could be assumed to have no intelligent order, explaining things merely in terms of motion and matter. The study of such things had been the realm of mythology and tradition, however. Aristotle later created a less controversial systematic programme of Socratic philosophy which was teleological, and he rejected many of the conclusions of earlier scientists. For example, in his physics the sun goes around the earth, and each thing has a formal cause, a final cause and a role in the rational cosmic order. Motion and change are described as the actualization of potentials already in things. While the Socratics insisted that philosophy should be used to consider the practical question of the best way to live for a human being, they did not argue for any other types of applied science.
Science
–
Maize, known in some English-speaking countries as corn, is a large grain plant domesticated by indigenous peoples in Mesoamerica in prehistoric times.
Science
–
The scale of the universe mapped to the branches of science and the hierarchy of science.
Science
–
Aristotle (384 BC – 322 BC), one of the early figures in the development of the scientific method.
Science
–
Galen (129 – c. 216) noted that the optic chiasm is X-shaped. (Engraving from Vesalius, 1543)
14.
Physics
–
Physics is the natural science that involves the study of matter and its motion and behavior through space and time, along with related concepts such as energy and force. One of the most fundamental scientific disciplines, the main goal of physics is to understand how the universe behaves. Physics is one of the oldest academic disciplines, perhaps the oldest through its inclusion of astronomy. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the mechanisms of other sciences while opening new avenues of research in areas such as mathematics. Physics also makes significant contributions through advances in new technologies that arise from theoretical breakthroughs; the United Nations named 2005 the World Year of Physics. Astronomy is the oldest of the natural sciences; the stars and planets were often a target of worship, believed to represent the gods. While the explanations for these phenomena were often unscientific and lacking in evidence, according to Asger Aaboe the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy. In the medieval Islamic world, the most notable innovations were in the field of optics and vision, which came from the works of many scientists like Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi and Avicenna. The most notable work was The Book of Optics, written by Ibn al-Haytham, in which he was not only the first to disprove the ancient Greek idea about vision but also came up with a new theory. In the book, he was also the first to study the phenomenon of the pinhole camera. Many later European scholars and fellow polymaths, from Robert Grosseteste and Leonardo da Vinci to René Descartes, Johannes Kepler and Isaac Newton, were in his debt.
Indeed, the influence of Ibn al-Haytham's Optics ranks alongside that of Newton's work of the same title. The translation of The Book of Optics had a huge impact on Europe: from it, later European scholars were able to build the same devices that Ibn al-Haytham had built, and from this such important things as eyeglasses, magnifying glasses and telescopes were developed. Physics became a separate science when early modern Europeans used experimental and quantitative methods to discover what are now considered to be the laws of physics. Newton also developed calculus, the mathematical study of change, which provided new methods for solving physical problems. The discovery of new laws in thermodynamics, chemistry and electromagnetics resulted from greater research efforts during the Industrial Revolution as energy needs increased. However, inaccuracies in classical mechanics for very small objects and very high velocities led to the development of modern physics in the 20th century. Modern physics began in the early 20th century with the work of Max Planck in quantum theory and Albert Einstein's theory of relativity; both of these theories came about due to inaccuracies in classical mechanics in certain situations. Quantum mechanics would come to be pioneered by Werner Heisenberg and Erwin Schrödinger, among others; from this early work, and work in related fields, the Standard Model of particle physics was derived. Areas of mathematics in general are important to this field, such as the study of probabilities. In many ways, physics stems from ancient Greek philosophy.
Physics
–
Further information: Outline of physics
Physics
–
Ancient Egyptian astronomy is evident in monuments like the ceiling of Senemut's tomb from the Eighteenth Dynasty of Egypt.
Physics
–
Sir Isaac Newton (1643–1727), whose laws of motion and universal gravitation were major milestones in classical physics
Physics
–
Albert Einstein (1879–1955), whose work on the photoelectric effect and the theory of relativity led to a revolution in 20th century physics
15.
Game theory
–
Game theory is the study of mathematical models of conflict and cooperation between intelligent rational decision-makers. Game theory is used in economics, political science and psychology, as well as in logic and computer science. Originally it addressed zero-sum games, in which one person's gains result in losses for the other participants. Today, game theory applies to a wide range of behavioral relations, and is now an umbrella term for the science of logical decision making in humans and animals. Modern game theory began with the idea regarding the existence of mixed-strategy equilibria in two-person zero-sum games, proved by John von Neumann. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, and his paper was followed by the 1944 book Theory of Games and Economic Behavior, co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility. This theory was developed extensively in the 1950s by many scholars, and game theory was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been recognized as an important tool in many fields, with the Nobel Memorial Prize in Economic Sciences going to game theorist Jean Tirole in 2014; John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology. Early discussions of examples of two-person games occurred long before the rise of modern mathematical game theory; the first known discussion of game theory occurred in a letter written in 1713 by Charles Waldegrave, an active Jacobite and uncle to James Waldegrave, a British diplomat. In this letter, Waldegrave provides a mixed-strategy solution to a two-person version of the card game le Her. James Madison made what we now recognize as a game-theoretic analysis of the ways states can be expected to behave under different systems of taxation.
In 1913 Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels ("On an Application of Set Theory to the Theory of the Game of Chess"), which proved that the optimal strategy in chess is strictly determined. This paved the way for more general theorems. The Danish mathematician Zeuthen proved that the mathematical model had a winning strategy by using Brouwer's fixed point theorem. In his 1938 book Applications aux Jeux de Hasard ("Applications to Games of Chance") and earlier notes, Borel conjectured the non-existence of mixed-strategy equilibria in two-person zero-sum games, a conjecture that was proved false. Game theory did not really exist as a unique field until John von Neumann published a paper in 1928. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, and his paper was followed by his 1944 book Theory of Games and Economic Behavior, co-authored with Oskar Morgenstern.
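The mixed-strategy equilibria discussed above can be illustrated on the smallest interesting case. Below is a sketch using the standard closed-form solution of a 2×2 zero-sum game, assuming the game has no saddle point so both players must mix; it is an illustration of the concept, not von Neumann's general proof:

```python
def solve_2x2_zero_sum(a, b, c, d):
    """Mixed-strategy solution of a 2x2 zero-sum game with payoff
    matrix [[a, b], [c, d]] to the row player, assuming no saddle
    point (so both players must randomize)."""
    denom = a - b - c + d
    p = (d - c) / denom              # probability row player picks row 1
    q = (d - b) / denom              # probability column player picks column 1
    value = (a * d - b * c) / denom  # game value to the row player
    return p, q, value

# Matching pennies: each player should mix 50/50, and the game value is 0,
# so neither player can gain by deviating from the equilibrium mix.
print(solve_2x2_zero_sum(1, -1, -1, 1))  # (0.5, 0.5, 0.0)
```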
Game theory
–
An extensive form game
16.
Complex systems
–
Complex systems present problems both in mathematical modelling and philosophical foundations. The subject is also called complex systems theory, complexity science, the study of complex systems, complex networks or network science. Such a systems approach is used in computer science, biology, economics, physics, chemistry and architecture, and a variety of abstract theoretical complex systems is studied as a field of mathematics. The key problems of complex systems are difficulties with their formal modelling and simulation; from such a perspective, in different research contexts complex systems are defined on the basis of their different attributes. Since all complex systems have many interconnected components, the science of networks and network theory are important and useful tools for the study of complex systems. A theory for the resilience of a system of systems, represented by a network of interdependent networks, was developed by Buldyrev et al. A consensus regarding a single universal definition of complex system does not yet exist; systems that are less usefully represented mathematically may instead be described with various other kinds of narratives. The study of complex system models is used for many scientific questions poorly suited to the traditional mechanistic conception provided by science. Linear systems represent the class of systems for which general techniques for stability control exist; however, many systems are inherently complex in terms of the definition above. This debate would notably lead economists, politicians and other parties to explore the question of computational complexity. Gregory Bateson played a key role in establishing the connection between anthropology and systems theory; he recognized that the interactive parts of cultures function much like ecosystems. The first research institute focused on complex systems, the Santa Fe Institute, was founded in 1984. Today, there are over 50 institutes and research centers focusing on complex systems.
The traditional approach to dealing with complexity is to reduce or constrain it. Typically, this involves compartmentalisation: dividing a large system into separate parts. Organizations, for instance, divide their work into departments that deal with separate issues, and engineering systems are designed using modular components. However, modular designs become susceptible to failure when issues arise that bridge the divisions. As projects and acquisitions become increasingly complex, companies and governments are challenged to find effective ways to manage mega-acquisitions such as the Army Future Combat Systems. Acquisitions such as the FCS rely on a web of interrelated parts which interact unpredictably. Over the last decades, within the emerging field of complexity economics, new predictive tools have been developed to explain economic growth.
Complex systems
–
Complex systems
Complex systems
–
A Braitenberg simulation, programmed in breve, an artificial life simulator
Complex systems
–
A complex adaptive system model
Complex systems
–
This is a schematic representation of three types of mathematical models of complex systems with the level of their mechanistic understanding.
17.
Probability interpretations
–
The word probability has been used in a variety of ways since it was first applied to the mathematical study of games of chance. Does probability measure the real, physical tendency of something to occur, or is it a measure of how strongly one believes it will occur? In answering such questions, mathematicians interpret the probability values of probability theory. There are two broad categories of probability interpretations, which can be called physical and evidential probabilities. Physical probabilities, which are also called objective or frequency probabilities, are associated with random physical systems such as roulette wheels and rolling dice. In such systems, a given type of event tends to occur at a persistent rate, or relative frequency, and physical probabilities either explain, or are invoked to explain, these stable frequencies. The two main kinds of theory of physical probability are frequentist accounts and propensity accounts. On most accounts, evidential probabilities are considered to be degrees of belief. The four main evidential interpretations are the classical interpretation, the subjective interpretation, the epistemic or inductive interpretation and the logical interpretation. There are also interpretations of probability covering groups, which are often labelled as intersubjective. Some interpretations of probability are associated with approaches to statistical inference, including theories of estimation. The physical interpretation, for example, is taken by followers of frequentist statistical methods, such as Ronald Fisher and Jerzy Neyman. This article, however, focuses on the interpretations of probability rather than theories of statistical inference. The terminology of this topic is rather confusing, in part because probabilities are studied within a variety of academic fields. The word frequentist is especially tricky: to philosophers it refers to a particular theory of physical probability.
To scientists, on the other hand, frequentist probability is just another name for physical probability, and those who promote Bayesian inference view frequentist statistics as an approach to inference that recognises only physical probabilities. It is unanimously agreed that statistics depends somehow on probability, but, as to what probability is and how it is connected with statistics, there has seldom been such complete disagreement and breakdown of communication since the Tower of Babel. Doubtless, much of the disagreement is merely terminological and would disappear under sufficiently sharp analysis. The philosophy of probability presents problems chiefly in matters of epistemology and the uneasy interface between mathematical concepts and ordinary language as it is used by non-mathematicians. Probability theory is an established field of study in mathematics. The classical definition of probability was the first attempt at mathematical rigour in the field, developed from studies of games of chance; it states that probability is shared equally between all the possible outcomes, provided these outcomes can be deemed equally likely.
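The classical definition in the last sentence reduces to counting favourable outcomes among equally likely ones. A minimal sketch, using a fair die as the example:

```python
from fractions import Fraction

def classical_probability(event, sample_space):
    """Classical definition: favourable outcomes over total outcomes,
    valid only when all outcomes can be deemed equally likely."""
    return Fraction(len(event & sample_space), len(sample_space))

die = {1, 2, 3, 4, 5, 6}
odd = {1, 3, 5}
print(classical_probability(odd, die))  # 1/2
```

Using exact fractions rather than floats keeps the ratio in the same form the classical interpretation states it: 3 favourable cases out of 6 equally likely ones.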
Probability interpretations
–
The classical definition of probability works well for situations with only a finite number of equally-likely outcomes.
Probability interpretations
–
For frequentists, the probability of the ball landing in any pocket can be determined only by repeated trials in which the observed result converges to the underlying probability in the long run.
Probability interpretations
–
Gambling odds reflect the average bettor's 'degree of belief' in the outcome.
18.
Experiment
–
An experiment is a procedure carried out to support, refute, or validate a hypothesis. Experiments provide insight into cause-and-effect by demonstrating what outcome occurs when a particular factor is manipulated. Experiments vary greatly in goal and scale, but always rely on repeatable procedure and logical analysis of the results. There also exist natural experimental studies. A child may carry out basic experiments to understand gravity, while teams of scientists may take years of systematic investigation to advance their understanding of a phenomenon. Experiments and other types of activities are very important to student learning in the science classroom: experiments can raise test scores and help a student become more engaged and interested in the material they are learning. Experiments can vary from personal and informal natural comparisons to highly controlled ones, and uses of experiments vary considerably between the natural and human sciences. Experiments typically include controls, which are designed to minimize the effects of variables other than the single independent variable. This increases the reliability of the results, often through a comparison between control measurements and the other measurements; scientific controls are a part of the scientific method. Ideally, all variables in an experiment are controlled and none are uncontrolled; in such an experiment, if all controls work as expected, it is possible to conclude that the experiment works as intended, and that results are due to the effect of the tested variable. In the scientific method, an experiment is a procedure that arbitrates between competing models or hypotheses. Researchers also use experimentation to test existing theories or new hypotheses to support or disprove them. An experiment usually tests a hypothesis, which is an expectation about how a particular process or phenomenon works.
However, an experiment may also aim to answer a question, without a specific expectation about what the experiment will reveal. If an experiment is carefully conducted, the results usually either support or disprove the hypothesis. According to some philosophies of science, an experiment can never prove a hypothesis; on the other hand, an experiment that provides a counterexample can disprove a theory or hypothesis. An experiment must also control the possible confounding factors: any factors that would mar the accuracy or repeatability of the experiment or the ability to interpret the results. Confounding is commonly eliminated through scientific controls and/or, in randomized experiments, through random assignment. In engineering and the physical sciences, experiments are a primary component of the scientific method. They are used to test theories and hypotheses about how physical processes work under particular conditions; typically, experiments in these fields focus on replication of identical procedures in hopes of producing identical results in each replication. In medicine and the social sciences, the prevalence of experimental research varies widely across disciplines. In contrast to norms in the physical sciences, the focus in these fields is typically on the average treatment effect or another test statistic produced by the experiment.
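Random assignment, mentioned above as the randomized-experiment remedy for confounding, is simple to sketch: shuffle the subjects and split them, so that on average confounding factors balance across the two groups. The subject labels and seed below are hypothetical:

```python
import random

def randomly_assign(subjects, seed=0):
    """Randomly split subjects into equal-sized treatment and control
    groups, so confounders balance out between groups on average."""
    rng = random.Random(seed)  # fixed seed only to make the sketch repeatable
    shuffled = subjects[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical pool of 20 subjects, identified by number.
treatment, control = randomly_assign(list(range(20)))
```

In a real experiment the treatment group then receives the manipulated factor and the control group does not, and the comparison between the two estimates the average treatment effect.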
Experiment
–
Even very young children perform rudimentary experiments to learn about the world and how things work.
Experiment
–
Original map by John Snow showing the clusters of cholera cases in the London epidemic of 1854
19.
Likelihood function
–
In statistics, a likelihood function is a function of the parameters of a statistical model, given data. Likelihood functions play a key role in statistical inference, especially methods of estimating a parameter from a set of statistics. In informal contexts, likelihood is often used as a synonym for probability; in statistics, a distinction is made depending on the roles of outcomes versus parameters. Probability is used before data are available to describe possible future outcomes given a fixed value for the parameter; likelihood is used after data are available to describe a function of a parameter for a given outcome. The likelihood of a parameter value θ, given outcomes x, is equal to the probability assumed for those observed outcomes given that parameter value. The likelihood function is defined differently for discrete and continuous probability distributions. Let X be a random variable with a discrete probability distribution p depending on a parameter θ; then the function L(θ | x) = pθ(x) = Pθ(X = x), considered as a function of θ, is called the likelihood function. Let X be a random variable following an absolutely continuous probability distribution with density function f depending on a parameter θ; then the function L(θ | x) = fθ(x), considered as a function of θ, is called the likelihood function. Sometimes the density function for the value x of X given the parameter value θ is written as f(x | θ). This provides a likelihood function for any probability model, whether discrete or absolutely continuous. For many applications, the logarithm of the likelihood function, called the log-likelihood, is more convenient to work with. For example, some likelihood functions are for the parameters that explain a collection of statistically independent observations. In such a situation, the likelihood function factors into a product of individual likelihood functions; the logarithm of this product is a sum of individual logarithms, and the derivative of a sum of terms is often easier to compute than the derivative of a product.
In addition, several common distributions have likelihood functions that contain products of factors involving exponentiation; the logarithm of such a function is a sum of products, again easier to differentiate than the original function. Edwards established the basis for use of the log-likelihood ratio as a measure of relative support for one hypothesis against another; the support function is then the logarithm of the likelihood function. Both terms are used in phylogenetics but were not adopted in a general treatment of the topic of statistical evidence. As an example, the gamma distribution has two parameters, α and β, and its likelihood function for a single observation x is L(α, β | x) = (β^α / Γ(α)) x^(α−1) e^(−βx).
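As a concrete case of the product and log-sum structure described above, consider a Bernoulli (coin-flip) model: after observing two heads, the likelihood of the heads-probability θ is L(θ) = θ², and the log-likelihood of independent flips is a sum of per-flip logarithms. A minimal sketch:

```python
import math

def bernoulli_likelihood(theta, observations):
    """Likelihood of heads-probability theta for a sequence of coin
    flips (1 = heads, 0 = tails): a product over the observations."""
    L = 1.0
    for x in observations:
        L *= theta if x == 1 else (1 - theta)
    return L

def bernoulli_log_likelihood(theta, observations):
    # The log turns the product into a sum, which is easier to differentiate.
    return sum(math.log(theta if x == 1 else 1 - theta) for x in observations)

# After observing HH, L(theta) = theta**2; it is largest at theta = 1.
print(bernoulli_likelihood(0.5, [1, 1]))  # 0.25
print(bernoulli_likelihood(1.0, [1, 1]))  # 1.0
```

Note that L here is a function of θ with the data held fixed, exactly the probability-versus-likelihood role reversal described in the text.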
Likelihood function
–
The likelihood function for estimating the probability of a coin landing heads-up without prior knowledge after observing HH
21.
Witness
–
A witness is someone who has, who claims to have, or is thought by someone with authority to compel testimony to have, knowledge relevant to an event or other matter of interest. A percipient witness or eyewitness is one who testifies to what they perceived through their senses; a hearsay witness is one who testifies to what someone else said or wrote. In most court proceedings there are limitations on when hearsay evidence is admissible; such limitations do not apply to grand jury investigations or many administrative proceedings, and some types of statements are not deemed to be hearsay and are not subject to such limitations. An expert witness may or may not also be a percipient witness. A reputation witness is one who testifies about the reputation of a person or business entity, when reputation is material to the dispute at issue. Sometimes the testimony is provided in public, sometimes in a confidential setting. Although informally a witness includes whoever perceived the event, in law a witness is different from an informant: a confidential informant is someone who claimed to have witnessed an event or has hearsay information, and the information from the confidential informant may have been used by a police officer or other official acting as a hearsay witness to obtain a search warrant. A subpoena commands a person to appear; it is used to compel the testimony of a witness in a trial. Usually it can be issued by a judge, by the lawyer representing the plaintiff or the defendant in a civil trial, or by the prosecutor or the defense attorney in a criminal proceeding. In many jurisdictions it is compulsory to comply with the subpoena and to take an oath. In a court proceeding, a witness may be called by either the prosecution or the defense. The side that calls the witness first asks questions in what is called direct examination; the opposing side then may ask their own questions in what is called cross-examination.
In some cases, redirect examination may then be used by the side that called the witness. Recalling a witness means calling back a witness who has already given testimony in the proceeding. Witnesses are usually permitted to testify only to what they experienced first hand. In most cases, they may not testify about something they were told; this restriction does not apply to expert witnesses, who may, however, only testify in the area of their expertise. Eyewitness testimony is generally presumed to be more reliable than circumstantial evidence. Studies have shown, however, that individual, separate witness testimony is often flawed; this can occur because of flaws in eyewitness identification, or because a witness is lying. One study involved an experiment in which subjects acted as jurors in a criminal case. Jurors heard a description of a robbery-murder, then a prosecution argument, and then an argument for the defense.
Witness
–
Heinrich Buscher as a witness during the Nuremberg Trials
22.
Legal case
–
Legal Case was an Irish-bred, British-trained Thoroughbred racehorse and sire. After winning the Champion Stakes in 1989 he was never as good again, but he did win the Premio Roma in 1990; after his retirement from racing he had some success as a breeding stallion in Brazil. Legal Case was a bay horse with no white markings, bred in Ireland by Ovidstown Investments Ltd. He was sired by the dual Prix de l'Arc de Triomphe winner Alleged out of the mare Maryinsky. Alleged was a stallion and a strong influence for stamina; his best winners included Miss Alleged, Shantou and Law Society. Maryinsky won two races at Del Mar racetrack in 1980. Apart from Legal Case, Maryinsky also produced La Sky, the dam of the Epsom Oaks winner Love Divine, who in turn produced the St Leger winner Sixties Icon. During his racing career Legal Case was owned by the businessman Sir Gordon White. Legal Case was unraced as a two-year-old and did not appear on a racecourse until June 1989, when he contested a maiden race over eight and a half furlongs at Beverley Racecourse. A month later he was ridden by Frankie Dettori when he started at odds of 2/9 for a race at Windsor Racecourse. Cochrane regained the ride when Legal Case was moved up in class for the Listed Winter Hill Stakes at Windsor in August, in which he was matched against older horses for the first time. He started favourite but was beaten three lengths by the Michael Stoute-trained colt Dolpour, with Opening Verse finishing fifth of the seven runners. In September Legal Case was moved up to Group Three class for the Select Stakes over ten furlongs at Goodwood Racecourse; ridden by Dettori, he started the 7/4 favourite against four opponents. After being restrained in the early stages he took the lead a furlong out and drew away to win by four lengths from Greenwich Papillon, with Indian Queen three lengths back in third place. 
The colt was moved up to the highest level when he was sent to France to contest the 68th running of the Prix de l'Arc de Triomphe over 2400 metres at Longchamp Racecourse on 8 October. Less than two weeks after his run at Longchamp, Legal Case, ridden by Cochrane, was one of eleven horses to contest the Champion Stakes over ten furlongs at Newmarket Racecourse. Dolpour was made favourite at 4/1, with Legal Case the 5/1 second choice in the betting alongside the four-year-old Ile de Chypre; the other contenders included the Dewhurst Stakes winner Scenic, the improving handicapper Braashee, the Royal Lodge Stakes winner High Estate and Ile de Nisky. Ile de Chypre led from the start, with Legal Case restrained towards the rear of the field before making progress in the last quarter mile on the stands side. Inside the final furlong the three-year-olds Dolpour, Legal Case and Scenic moved up to challenge Ile de Chypre, although Scenic was squeezed for room; the final strides saw Dolpour, Ile de Chypre and Legal Case racing neck-and-neck before crossing the line together. After a photo finish, Legal Case was declared the winner by a head from Dolpour. In 1990 Dettori took over from Cochrane as Cumani's stable jockey.
Legal case
23.
Europe
–
Europe is a continent that comprises the westernmost part of Eurasia. Europe is bordered by the Arctic Ocean to the north, the Atlantic Ocean to the west and the Mediterranean Sea to the south; the non-oceanic borders of Europe, a concept dating back to classical antiquity, are arbitrary. Europe covers about 10,180,000 square kilometres, or 2% of the Earth's surface. Politically, Europe is divided into about fifty sovereign states, of which the Russian Federation is the largest and most populous, spanning 39% of the continent and comprising 15% of its population. Europe had a population of about 740 million as of 2015. Further from the sea, seasonal differences are more noticeable than close to the coast. Europe, in particular ancient Greece, was the birthplace of Western civilization. The fall of the Western Roman Empire during the Migration Period marked the end of ancient history. Renaissance humanism, exploration, art, and science led to the modern era; from the Age of Discovery onwards, Europe played a predominant role in global affairs. Between the 16th and 20th centuries, European powers controlled at various times the Americas, most of Africa, and Oceania. The Industrial Revolution, which began in Great Britain at the end of the 18th century, gave rise to economic, cultural, and social change in Western Europe. During the Cold War, Europe was divided along the Iron Curtain between NATO in the west and the Warsaw Pact in the east, until the revolutions of 1989 and the fall of the Berlin Wall. In 1949, the Council of Europe was founded following a speech by Sir Winston Churchill; it includes all European states except Belarus, Kazakhstan and Vatican City. Further European integration by some states led to the formation of the European Union; the EU originated in Western Europe but has been expanding eastward since the fall of the Soviet Union in 1991. The European anthem is "Ode to Joy", and EU states celebrate peace and unity on Europe Day. In classical Greek mythology, Europa is the name of either a Phoenician princess or of a queen of Crete. 
The name contains the elements εὐρύς (eurús), "wide, broad", and ὤψ (ōps), "eye, face"; "broad" has been an epithet of Earth herself in the reconstructed Proto-Indo-European religion and the poetry devoted to it. For the second part, the divine attributes of grey-eyed Athena or ox-eyed Hera have also been cited as parallels. The same naming motif, according to cartographic convention, appears in Greek Ανατολή (Anatolḗ, "east"). Martin Litchfield West stated that phonologically, the match between Europa's name and any form of the Semitic word is very poor. Next to these there is also a proposed Proto-Indo-European root *h1regʷos, meaning darkness. Most major world languages use words derived from Eurṓpē or Europa to refer to the continent. In some Turkic languages the originally Persian name Frangistan is used casually in referring to much of Europe, besides official names such as Avrupa or Evropa.
Europe
–
Reconstruction of Herodotus' world map
Europe
Europe
–
A medieval T and O map from 1472 showing the three continents as domains of the sons of Noah — Asia to Sem (Shem), Europe to Iafeth (Japheth), and Africa to Cham (Ham)
Europe
–
Early modern depiction of Europa regina ('Queen Europe') and the mythical Europa of the 8th century BC.
24.
Italians
–
Italians are a nation and ethnic group native to Italy who share a common culture and ancestry and speak the Italian language as a native tongue. The majority of Italian nationals are speakers of Standard Italian. Italians have greatly influenced and contributed to the arts and music, science, technology, cuisine, sports, fashion, jurisprudence and banking; Italian people are generally known for their localism and their attention to clothing and family values. The term Italian is at least 3,000 years old and has a history that goes back to pre-Roman Italy. According to one of the more common explanations, the Latin term Italia was borrowed through Greek from the Oscan Víteliú. The bull was a symbol of the southern Italic tribes and was often depicted goring the Roman wolf as a defiant symbol of free Italy during the Social War. The Greek historian Dionysius of Halicarnassus states this account together with the legend that Italy was named after Italus, mentioned also by Aristotle and Thucydides. The Etruscan civilization reached its peak about the 7th century BC, but by 509 BC, when the Romans overthrew their Etruscan monarchs, its control in Italy was on the wane. By 350 BC, after a series of wars between Greeks and Etruscans, the Latins, with Rome as their capital, had gained the ascendancy, and by 272 BC they managed to unite the entire Italian peninsula. This period of unification was followed by one of conquest in the Mediterranean; in the course of the century-long struggle against Carthage, the Romans conquered Sicily, Sardinia and Corsica. Finally, in 146 BC, at the conclusion of the Third Punic War, with Carthage completely destroyed and its inhabitants enslaved, Rome became the dominant power in the Mediterranean. After the civil wars that ended the Roman Republic, Octavian, the final victor, was accorded the title of Augustus by the Senate and thereby became the first Roman emperor. After two centuries of rule, in the 3rd century AD, Rome was threatened by internal discord and menaced by Germanic and Asian invaders. 
Emperor Diocletian's administrative division of the empire into two parts in 285 provided only temporary relief; it became permanent in 395. In 313, Emperor Constantine accepted Christianity, and churches thereafter rose throughout the empire; however, he moved his capital from Rome to Constantinople. The last Western emperor, Romulus Augustulus, was deposed in 476 by a Germanic foederati general in Italy; his deposition marked the end of the western part of the Roman Empire. During most of the period from the fall of Rome until the Kingdom of Italy was established in 1861, the peninsula was divided among numerous states. Odoacer ruled well for 13 years after gaining control of Italy in 476. Then he was attacked and defeated by Theodoric, the king of another Germanic tribe, the Ostrogoths. Theodoric and Odoacer ruled jointly until 493, when Theodoric murdered Odoacer. Theodoric continued to rule Italy with an army of Ostrogoths and a government that was mostly Italian. After the death of Theodoric in 526, the kingdom began to grow weak.
Italians
–
Amerigo Vespucci, the notable geographer and traveller from whose name the word America is derived.
Italians
Italians
–
Christopher Columbus, the discoverer of the New World.
Italians
–
Laura Bassi, the first woman to hold a university chair in a scientific field of studies.
25.
Blaise Pascal
–
Blaise Pascal was a French mathematician, physicist, inventor, writer and Christian philosopher. He was a child prodigy who was educated by his father. Pascal also wrote in defence of the scientific method. In 1642, while still a teenager, he started some pioneering work on calculating machines; after three years of effort and 50 prototypes, he built 20 finished machines over the following 10 years. Following Galileo Galilei and Torricelli, in 1647 he rebutted Aristotle's followers who insisted that nature abhors a vacuum. Pascal's results caused many disputes before being accepted. In 1646, he and his sister Jacqueline identified with the religious movement within Catholicism known by its detractors as Jansenism. Following a religious experience in late 1654, he began writing works on philosophy. His two most famous works date from this period, the Lettres provinciales and the Pensées, the former set in the conflict between Jansenists and Jesuits. In 1654 he also wrote an important treatise on the arithmetical triangle. Between 1658 and 1659 he wrote on the cycloid and its use in calculating the volume of solids. Pascal had poor health, especially after the age of 18, and he died just two months after his 39th birthday. Pascal was born in Clermont-Ferrand, which is in France's Auvergne region. He lost his mother, Antoinette Begon, at the age of three. His father, Étienne Pascal, who also had an interest in science and mathematics, was a local judge. Pascal had two sisters, the younger Jacqueline and the elder Gilberte. In 1631, five years after the death of his wife, Étienne moved with his children to Paris. The newly arrived family soon hired Louise Delfault, a maid who eventually became an instrumental member of the family. Étienne, who never remarried, decided that he alone would educate his children, for they all showed extraordinary intellectual ability; the young Pascal showed an amazing aptitude for mathematics and science. 
Particularly of interest to Pascal was a work of Desargues on conic sections. Following Desargues' thinking, Pascal produced a treatise on conic sections containing what is now known as Pascal's theorem: if a hexagon is inscribed in a circle, then the three intersection points of opposite sides lie on a line. Pascal's work was so precocious that Descartes was convinced that Pascal's father had written it. In France at that time offices and positions could be, and were, bought and sold. In 1631 Étienne sold his position as president of the Cour des Aides for 65,665 livres. The money was invested in a government bond which provided, if not a lavish, then certainly a comfortable income, which allowed the Pascal family to move to Paris. But in 1638 Richelieu, desperate for money to carry on the Thirty Years' War, defaulted on the government's bonds. Suddenly Étienne Pascal's worth had dropped from nearly 66,000 livres to less than 7,300. It was only when Jacqueline performed well in a children's play with Richelieu in attendance that Étienne was pardoned.
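The arithmetical triangle mentioned above is built by making each entry the sum of the two entries above it; a minimal sketch (the function name is illustrative):

```python
def pascal_triangle(n_rows: int) -> list[list[int]]:
    """First n_rows of Pascal's arithmetical triangle: each interior entry
    is the sum of the two entries above it; edge entries are 1."""
    rows: list[list[int]] = []
    for n in range(n_rows):
        row = [1] * (n + 1)
        for k in range(1, n):
            row[k] = rows[n - 1][k - 1] + rows[n - 1][k]
        rows.append(row)
    return rows

print(pascal_triangle(5))
# [[1], [1, 1], [1, 2, 1], [1, 3, 3, 1], [1, 4, 6, 4, 1]]
```

Row n of the triangle lists the binomial coefficients C(n, k), which is why it appears throughout Pascal's work on games of chance.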
Blaise Pascal
–
Painting of Blaise Pascal made by François II Quesnel for Gérard Edelinck in 1691.
Blaise Pascal
–
An early Pascaline on display at the Musée des Arts et Métiers, Paris
Blaise Pascal
–
Portrait of Pascal
Blaise Pascal
–
Pascal studying the cycloid, by Augustin Pajou, 1785, Louvre
26.
Christiaan Huygens
–
Christiaan Huygens FRS was a prominent Dutch mathematician and scientist. He is known particularly as an astronomer, physicist, probabilist and horologist, and was a leading scientist of his time. His work included early telescopic studies of the rings of Saturn and the discovery of its moon Titan; he published major studies of mechanics and optics, and pioneered work on games of chance. Christiaan Huygens was born on 14 April 1629 in The Hague, into a rich and influential Dutch family; Christiaan was named after his paternal grandfather. His mother was Suzanna van Baerle; she died in 1637, shortly after the birth of Huygens' sister. The couple had five children, including Constantijn, Christiaan, Lodewijk and Philips. Constantijn Huygens, the father, was a diplomat and advisor to the House of Orange, and also a poet and musician. His friends included Galileo Galilei, Marin Mersenne and René Descartes. Huygens was educated at home until turning sixteen years old. He liked to play with miniatures of mills and other machines. His father gave him a liberal education: he studied languages and music, history and geography, mathematics, logic and rhetoric, but also dancing, fencing and horse riding. In 1644 Huygens had as his mathematical tutor Jan Jansz de Jonge Stampioen; Descartes was impressed by his skills in geometry. His father sent Huygens to study law and mathematics at the University of Leiden. Frans van Schooten was an academic at Leiden from 1646, and also a private tutor to Huygens and his elder brother, replacing Stampioen on the advice of Descartes. Van Schooten brought his mathematical education up to date, introducing him to the work of Fermat on differential geometry. Constantijn Huygens was closely involved in the new College at Breda, which lasted only until 1669; Christiaan Huygens lived there at the home of the jurist Johann Henryk Dauber, and had mathematics classes with the English lecturer John Pell. 
He completed his studies in August 1649 and then had a stint as a diplomat on a mission with Henry, Duke of Nassau. It took him to Bentheim, then Flensburg; he then set off for Denmark, visited Copenhagen and Helsingør, and hoped to cross the Øresund to visit Descartes in Stockholm. His father Constantijn had wished his son Christiaan to be a diplomat, but in political terms the First Stadtholderless Period that began in 1650 meant that the House of Orange was not in power, removing Constantijn's influence. Further, he realised that his son had no interest in such a career. Huygens generally wrote in French or Latin. While still a student at Leiden he began a correspondence with the intelligencer Mersenne. Mersenne wrote to Constantijn on his son's talent for mathematics; the letters show the early interests of Huygens in mathematics.
Christiaan Huygens
–
Christiaan Huygens by Bernard Vaillant, Museum Hofwijck, Voorburg
Christiaan Huygens
–
Correspondance
Christiaan Huygens
–
The catenary in a manuscript of Huygens.
Christiaan Huygens
–
Christiaan Huygens, relief by Jean-Jacques Clérion, around 1670?
27.
Roger Cotes
–
Roger Cotes FRS was an English mathematician, known for working closely with Isaac Newton by proofreading the second edition of his famous book, the Principia, before publication. He also invented the quadrature formulas known as the Newton–Cotes formulas, and first introduced what is known today as Euler's formula. He was the first Plumian Professor at Cambridge University, from 1707 until his death. Cotes was born in Burbage, Leicestershire. His parents were Robert, the rector of Burbage, and his wife Grace, née Farmer. Roger had an elder brother, Anthony, and a younger sister, Susanna. At first Roger attended Leicester School, where his mathematical talent was recognised. His aunt Hannah had married the Rev. John Smith, and Smith took on the role of tutor to encourage Roger's talent; the Smiths' son, Robert Smith, would become a close associate of Roger Cotes throughout his life. Cotes later studied at St Paul's School in London and entered Trinity College, Cambridge in 1699. He graduated BA in 1702 and MA in 1706. Roger Cotes's contributions to computational methods lie heavily in the field of astronomy, and Cotes began his career with a focus on astronomy. He became a fellow of Trinity College in 1707, and at age 26 he became the first Plumian Professor of Astronomy. On his appointment to the professorship, he opened a subscription list in an effort to provide an observatory for Trinity. Unfortunately, the observatory was still unfinished when Cotes died, and it was demolished in 1797. In correspondence with Isaac Newton, Cotes designed a telescope with a mirror revolving by clockwork. He recomputed the solar and planetary tables of Giovanni Domenico Cassini and John Flamsteed. Finally, in 1707 he formed a school of physical sciences at Trinity in partnership with William Whiston. From 1709 to 1713, Cotes became heavily involved with the second edition of Newton's Principia. 
The first edition of Principia had only a few copies printed and was in need of revision to include Newton's later work and principles of lunar theory. Newton at first had a casual approach to the revision, since he had all but given up scientific work. However, through the passion displayed by Cotes, Newton's scientific hunger was once again reignited. The two spent nearly three and a half years collaborating on the work, in which they fully deduce, from Newton's laws of motion, the theory of the moon and of the equinoxes. Only 750 copies of the second edition were printed; however, a pirated copy from Amsterdam met all other demand.
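The identity Cotes anticipated, today written as Euler's formula e^{ix} = cos x + i sin x (Cotes stated the equivalent logarithmic form), can be checked numerically; a minimal sketch using Python's standard library:

```python
import cmath
import math

# Check Euler's formula e^{ix} = cos(x) + i*sin(x) at a few sample points.
for x in [0.0, 0.75, math.pi / 2, math.pi]:
    lhs = cmath.exp(1j * x)
    rhs = complex(math.cos(x), math.sin(x))
    assert abs(lhs - rhs) < 1e-12

print("Euler's formula verified at sample points")
```

At x = π this gives the special case e^{iπ} = −1.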
Roger Cotes
–
This bust was commissioned by Robert Smith and sculpted posthumously by Peter Scheemakers in 1758.
28.
Adrien-Marie Legendre
–
Adrien-Marie Legendre was a French mathematician. Legendre made numerous contributions to mathematics; well-known and important concepts such as the Legendre polynomials and the Legendre transformation are named after him. Adrien-Marie Legendre was born in Paris on 18 September 1752 to a wealthy family. He received his education at the Collège Mazarin in Paris, and defended his thesis in physics and mathematics in 1770. He taught at the École Militaire in Paris from 1775 to 1780; at the same time, he was associated with the Bureau des Longitudes. In 1782, the Berlin Academy awarded Legendre a prize for his treatise on projectiles in resistant media; this treatise also brought him to the attention of Lagrange. The Académie des Sciences made Legendre an adjoint member in 1783, and in 1789 he was elected a Fellow of the Royal Society. He assisted with the Anglo-French Survey to calculate the distance between the Paris Observatory and the Royal Greenwich Observatory by means of trigonometry. To this end, in 1787 he visited Dover and London together with Dominique, comte de Cassini; the three also visited William Herschel, the discoverer of the planet Uranus. Legendre lost his fortune in 1793 during the French Revolution. That year, he also married Marguerite-Claudine Couhin, who helped him put his affairs in order. In 1795 Legendre became one of six members of the mathematics section of the reconstituted Académie des Sciences, renamed the Institut National des Sciences et des Arts; later, in 1803, Napoleon reorganized the Institut National. Legendre subsequently lost his pension, which was partially reinstated with the change in government in 1828. In 1831 he was made an officer of the Légion d'Honneur. Legendre died in Paris on 10 January 1833, after a long and painful illness, and Legendre's widow carefully preserved his belongings to memorialize him. 
Upon her death in 1856, she was buried next to her husband in the village of Auteuil, where the couple had lived. Legendre's name is one of the 72 names inscribed on the Eiffel Tower. Today, the term "least squares method" is used as a direct translation of the French méthode des moindres carrés. Around 1811 he named the gamma function, introduced the symbol Γ, and normalized it so that Γ(n+1) = n!. In 1830 he gave a proof of Fermat's last theorem for exponent n = 5, which was also proven by Lejeune Dirichlet in 1828. In number theory, he conjectured the quadratic reciprocity law, subsequently proved by Gauss; in connection with this, the Legendre symbol is named after him. He also did pioneering work on the distribution of primes, and on the application of analysis to number theory. His 1798 conjecture of the prime number theorem was proved by Hadamard. He is known for the Legendre transformation, which is used to go from the Lagrangian to the Hamiltonian formulation of classical mechanics; in thermodynamics it is also used to obtain the enthalpy and the Helmholtz and Gibbs energies from the internal energy.
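Legendre's normalization of the gamma function, Γ(n+1) = n!, can be checked for small integers; a minimal sketch using Python's standard library:

```python
import math

# Legendre's normalization of the gamma function: Γ(n+1) = n! for n = 0, 1, 2, ...
for n in range(10):
    assert math.isclose(math.gamma(n + 1), math.factorial(n))

print([int(math.gamma(n + 1)) for n in range(6)])  # [1, 1, 2, 6, 24, 120]
```

The gamma function extends the factorial to non-integer (and complex) arguments; the normalization fixes which of the many possible extensions is meant.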
Adrien-Marie Legendre
–
1820 watercolor caricature of Adrien-Marie Legendre by French artist Julien-Leopold Boilly (see portrait debacle), the only existing portrait known
Adrien-Marie Legendre
–
1820 watercolor caricatures of the French mathematicians Adrien-Marie Legendre (left) and Joseph Fourier (right) by French artist Julien-Leopold Boilly, watercolor portrait numbers 29 and 30 of Album de 73 portraits-charge aquarellés des membres de l'Institut.
Adrien-Marie Legendre
–
Side-view sketch of French politician Louis Legendre (1752–1797), whose portrait was mistakenly used for nearly 200 years to represent French mathematician Adrien-Marie Legendre, up until 2005 when the mistake was discovered.
29.
Method of least squares
–
The method of least squares is a standard approach in regression analysis to the approximate solution of overdetermined systems, i.e. sets of equations in which there are more equations than unknowns. "Least squares" means that the overall solution minimizes the sum of the squares of the errors made in the results of every single equation. The most important application is in data fitting; the best fit in the least-squares sense minimizes the sum of squared residuals. Least squares problems fall into two categories, linear (or ordinary) least squares and non-linear least squares, depending on whether or not the residuals are linear in all unknowns. The linear least-squares problem occurs in statistical regression analysis; it has a closed-form solution. The non-linear problem is solved by iterative refinement; at each iteration the system is approximated by a linear one. Polynomial least squares describes the variance in a prediction of the dependent variable as a function of the independent variable. When the observations come from an exponential family and mild conditions are satisfied, least-squares estimates and maximum-likelihood estimates are identical. The method of least squares can also be derived as a method of moments estimator. The following discussion is mostly presented in terms of linear functions, but the use of least squares is valid and practical for more general families of functions. Also, by iteratively applying local quadratic approximation to the likelihood, the least-squares method may be used to fit a generalized linear model. For the topic of approximating a function by a sum of others using an objective function based on squared distances, see least squares. The least-squares method is credited to Carl Friedrich Gauss. The accurate description of the behavior of celestial bodies was the key to enabling ships to sail in open seas. One early idea was the combination of different observations taken under the same conditions, contrary to simply trying one's best to observe and record a single observation accurately; the approach was known as the method of averages. 
Another was the combination of different observations taken under different conditions; this method came to be known as the method of least absolute deviation, and was notably performed by Roger Joseph Boscovich in his work on the shape of the earth in 1757. A further step was the development of a criterion that can be evaluated to determine when the solution with the minimum error has been achieved. Laplace tried to specify a mathematical form of the probability density for the errors; he felt these to be the simplest assumptions he could make, and he had hoped to obtain the arithmetic mean as the best estimate. Instead, his estimator was the posterior median. The first clear and concise exposition of the method of least squares was published by Legendre in 1805. The technique is described there as an algebraic procedure for fitting linear equations to data. The value of Legendre's method of least squares was immediately recognized by leading astronomers. In 1809 Carl Friedrich Gauss published his method of calculating the orbits of celestial bodies. In that work he claimed to have been in possession of the method of least squares since 1795; this naturally led to a priority dispute with Legendre.
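The closed-form solution of the linear least-squares problem mentioned above minimizes the sum of squared residuals via the normal equations AᵀAc = Aᵀb. A minimal sketch for fitting a straight line y = c0 + c1·x (the function name is illustrative):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = c0 + c1*x, solving the 2x2 normal
    equations [[n, Sx], [Sx, Sxx]] @ [c0, c1] = [Sy, Sxy] in closed form."""
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    sy = sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx  # nonzero unless all xs coincide
    c0 = (sy * sxx - sx * sxy) / det
    c1 = (n * sxy - sx * sy) / det
    return c0, c1

# Points lying exactly on y = 1 + 2x are recovered exactly.
c0, c1 = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(round(c0, 10), round(c1, 10))  # 1.0 2.0
```

For more unknowns the same normal equations apply with a larger matrix, which is where the closed-form solution of the linear case comes from.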
Method of least squares
–
Carl Friedrich Gauss
30.
Robert Adrain
–
Robert Adrain was an Irish mathematician whose career was spent in the USA. He was considered one of the most brilliant mathematical minds of the time in America, and he is chiefly remembered for his formulation of the method of least squares. He was born in Carrickfergus, County Antrim, Ireland, but left Ireland after being wounded in the uprising of the United Irishmen in 1798 and moved to Princeton. He taught mathematics at various schools in the United States, and was president of the York County Academy in York, Pennsylvania, from 1801 to 1805. Adrain certainly did not know of the work of C. F. Gauss on least squares, although it is possible that he had read the work of A. M. Legendre. Adrain was an editor of and contributor to the Mathematical Correspondent, the first mathematical journal in the United States. He was elected a Fellow of the American Academy of Arts and Sciences. In 1825 he founded a somewhat more successful publication targeting a wider readership, The Mathematical Diary, which was published through 1832. Adrain was the father of Congressman Garnett B. Adrain. Robert Adrain died in New Brunswick, New Jersey. He is commemorated by a plaque, unveiled at Carrickfergus by the Ulster History Circle.
Robert Adrain
–
Robert Adrain
31.
Carl Friedrich Gauss
–
Johann Carl Friedrich Gauss was born on 30 April 1777 in Brunswick, in the Duchy of Brunswick-Wolfenbüttel, as the son of poor working-class parents. His mother never recorded his exact date of birth, remembering only that he had been born on a Wednesday, eight days before the Feast of the Ascension; Gauss later solved this puzzle about his birthdate in the context of finding the date of Easter. He was christened and confirmed in a church near the school he attended as a child. A contested story relates that, when he was eight, he figured out how to add up all the numbers from 1 to 100. There are many other anecdotes about his precocity while a toddler, and he made his first ground-breaking mathematical discoveries while still a teenager. He completed Disquisitiones Arithmeticae, his magnum opus, in 1798 at the age of 21. This work was fundamental in consolidating number theory as a discipline and has shaped the field to the present day. While at university, Gauss independently rediscovered several important theorems; his breakthrough came in 1796, when he showed that a regular heptadecagon (17-sided polygon) can be constructed with ruler and compass. Gauss was so pleased by this result that he requested that a regular heptadecagon be inscribed on his tombstone; the stonemason declined, stating that the difficult construction would essentially look like a circle. The year 1796 was most productive for both Gauss and number theory: he discovered the construction of the heptadecagon on 30 March. He further advanced modular arithmetic, greatly simplifying manipulations in number theory, and on 8 April he became the first to prove the quadratic reciprocity law. This remarkably general law allows mathematicians to determine the solvability of any quadratic equation in modular arithmetic. The prime number theorem, conjectured on 31 May, gives a good understanding of how the prime numbers are distributed among the integers. Gauss also discovered, on 10 July, that every positive integer is representable as a sum of at most three triangular numbers, and then jotted down in his diary the note "ΕΥΡΗΚΑ! num = Δ + Δ + Δ". 
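Gauss's "ΕΥΡΗΚΑ" claim — every integer is a sum of at most three triangular numbers T_k = k(k+1)/2 — can be checked numerically for small values; a brute-force sketch (function name illustrative, counting 0 = T_0 as triangular):

```python
def is_sum_of_three_triangulars(n: int) -> bool:
    """Check that n = T_a + T_b + T_c for triangular numbers T_k = k*(k+1)//2,
    allowing T_0 = 0 (which is how 'at most three' terms is expressed)."""
    tris = []
    k = 0
    while k * (k + 1) // 2 <= n:
        tris.append(k * (k + 1) // 2)
        k += 1
    tset = set(tris)
    return any(n - a - b in tset for a in tris for b in tris if a + b <= n)

# Verify the theorem for every integer from 0 to 1000.
print(all(is_sum_of_three_triangulars(n) for n in range(1001)))  # True
```

This only spot-checks a finite range, of course; Gauss's diary entry asserts the theorem for all integers.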
On 1 October he published a result on the number of solutions of polynomials with coefficients in finite fields. In 1831 Gauss developed a fruitful collaboration with the physics professor Wilhelm Weber, leading to new knowledge in magnetism and the discovery of Kirchhoff's circuit laws in electricity. It was during this time that he formulated his namesake law. Gauss and Weber constructed the first electromechanical telegraph in 1833, which connected the observatory with the institute for physics in Göttingen. In 1840, Gauss published his influential Dioptrische Untersuchungen, in which he gave the first systematic analysis of the formation of images under a paraxial approximation. Among his results, Gauss showed that under a paraxial approximation an optical system can be characterized by its cardinal points, and he derived the Gaussian lens formula. In 1845, he became an associated member of the Royal Institute of the Netherlands. In 1854, Gauss selected the topic for Bernhard Riemann's Habilitationsvortrag, Über die Hypothesen, welche der Geometrie zu Grunde liegen. On the way home from Riemann's lecture, Weber reported that Gauss was full of praise. Gauss died in Göttingen on 23 February 1855 and is interred in the Albani Cemetery there. Two individuals gave eulogies at his funeral: Gauss's son-in-law Heinrich Ewald, and Wolfgang Sartorius von Waltershausen. His brain was preserved and was studied by Rudolf Wagner, who found its mass to be 1,492 grams and the cerebral area equal to 219,588 square millimeters. Highly developed convolutions were also found, which in the early 20th century were suggested as the explanation of his genius. Gauss was a Lutheran Protestant, a member of the St. Albans Evangelical Lutheran church in Göttingen.
Carl Friedrich Gauss
–
Carl Friedrich Gauß (1777–1855), painted by Christian Albrecht Jensen
Carl Friedrich Gauss
–
Statue of Gauss at his birthplace, Brunswick
Carl Friedrich Gauss
–
Title page of Gauss's Disquisitiones Arithmeticae
Carl Friedrich Gauss
–
Gauss's portrait published in Astronomische Nachrichten 1828
32.
Augustus De Morgan
–
Augustus De Morgan was a British mathematician and logician. He formulated De Morgan's laws and introduced the term mathematical induction. Augustus De Morgan was born in Madurai, India in 1806. His father was Lieut.-Colonel John De Morgan, who held various appointments in the service of the East India Company. His mother, Elizabeth Dodson, was descended from James Dodson, who computed a table of anti-logarithms, that is, the numbers corresponding to exact logarithms. Augustus De Morgan became blind in one eye a month or two after he was born. The family moved to England when Augustus was seven months old. When De Morgan was ten years old, his father died, and Mrs. De Morgan resided at various places in the southwest of England. His mathematical talents went unnoticed until he was fourteen, when a family friend discovered him making an elaborate drawing of a figure in Euclid with ruler and compasses. She explained the aim of Euclid to Augustus, and gave him an initiation into demonstration. He received his secondary education from Mr. Parsons, a fellow of Oriel College, Oxford, who appreciated classics better than mathematics. His mother was an active and ardent member of the Church of England, and desired that her son should become a clergyman, but De Morgan was of a different mind; he later wrote, "I shall use the word Anti-Deism to signify the opinion that there does not exist a Creator who made and sustains the Universe." His college tutor was John Philips Higman, FRS; at college he played the flute for recreation and was prominent in the musical clubs. His love of knowledge for its own sake interfered with training for the great mathematical race; as a consequence he came out fourth wrangler. This entitled him to the degree of Bachelor of Arts, but to take the degree of Master of Arts, and thereby become eligible for a fellowship, it was then necessary to pass a theological test. To the signing of any such test De Morgan felt a strong objection. In about 1875 theological tests for academic degrees were abolished in the Universities of Oxford and Cambridge. 
As no career was open to him at his own university, he decided to go to the Bar and took up residence in London. About this time the movement for founding London University took shape: a body of liberal-minded men resolved to meet the difficulty by establishing in London a university on the principle of religious neutrality, and De Morgan, then 22 years of age, was appointed professor of mathematics. His introductory lecture "On the study of mathematics" is a discourse upon mental education of permanent value. The London University was a new institution, and the relations of the Council of management, the Senate of professors and the body of students were not well defined. A dispute arose between the professor of anatomy and his students, and in consequence of the action taken by the Council, De Morgan resigned; another professor of mathematics was appointed, who then drowned a few years later. De Morgan had shown himself a prince of teachers, and he was invited to return to his chair. Around the same time the Society for the Diffusion of Useful Knowledge was founded; its object was to spread scientific and other knowledge by means of cheap and clearly written treatises by the best writers of the time, and one of its most voluminous and effective writers was De Morgan. When De Morgan came to reside in London he found a congenial friend in William Frend, notwithstanding the latter's mathematical heresy about negative quantities.
Augustus De Morgan
–
Augustus De Morgan (1806-1871)
Augustus De Morgan
–
Augustus De Morgan.
33.
James Whitbread Lee Glaisher
–
James Whitbread Lee Glaisher FRS FRSE FRAS, son of James Glaisher the meteorologist, was a prolific English mathematician and astronomer. He was born in Lewisham in Kent on 5 November 1848, the son of the eminent astronomer James Glaisher; his mother was a noted photographer. He was educated at St Paul's School from 1858 and became something of a school celebrity in 1861 when he made two balloon ascents with his father to study the stratosphere. He won a Campden Exhibition Scholarship allowing him to study at Trinity College, Cambridge. Influential in his time on teaching at the University of Cambridge, he is now remembered mostly for work in number theory that anticipated later interest in the detailed properties of modular forms, and he published widely over other fields of mathematics. Glaisher was elected FRS in 1875. He was the editor-in-chief of the Messenger of Mathematics, and he was also the tutor of the philosopher Ludwig Wittgenstein. He was president of the Royal Astronomical Society in 1886–1888 and 1901–1903; when George Biddell Airy retired as Astronomer Royal in 1881, it is said that Glaisher was offered the post but declined. He did not marry; he lived on campus at Cambridge University and died in his lodgings there on 7 December 1928. He was a keen cyclist but preferred his penny-farthing to the safety bicycle, and was President of the Cambridge University Cycling Club from 1882 to 1885. He was also a keen collector of Delftware, and the university indulged him by allowing him a room in the Fitzwilliam Museum to house his personal collection.
James Whitbread Lee Glaisher
–
James Whitbread Lee Glaisher.
34.
Richard Dedekind
–
Julius Wilhelm Richard Dedekind was a German mathematician who made important contributions to abstract algebra, algebraic number theory and the definition of the real numbers. Dedekind's father was Julius Levin Ulrich Dedekind, an administrator of the Collegium Carolinum in Braunschweig; as an adult, Richard never used the names Julius Wilhelm. He was born, lived most of his life, and died in Braunschweig. He first attended the Collegium Carolinum in 1848 before transferring to the University of Göttingen in 1850. There, Dedekind was taught number theory by professor Moritz Stern. Gauss was still teaching, although mostly at an elementary level, and Dedekind became his last student. Dedekind received his doctorate in 1852, for a thesis titled Über die Theorie der Eulerschen Integrale; this thesis did not display the talent evident in Dedekind's subsequent publications. At that time, the University of Berlin, not Göttingen, was the main facility for mathematical research in Germany. Thus Dedekind went to Berlin for two years of study, where he and Bernhard Riemann were contemporaries; they were both awarded the habilitation in 1854. Dedekind returned to Göttingen to teach as a Privatdozent, giving courses on probability and geometry. He studied for a while with Peter Gustav Lejeune Dirichlet, and they became good friends. Because of lingering weaknesses in his knowledge, he studied elliptic and abelian functions. Yet he was also the first at Göttingen to lecture on Galois theory, and about this time he became one of the first people to understand the importance of the notion of groups for algebra and arithmetic. In 1858, he began teaching at the Polytechnic school in Zürich. When the Collegium Carolinum was upgraded to a Technische Hochschule in 1862, Dedekind returned to his native Braunschweig, where he spent the rest of his life, teaching at the Institute. He retired in 1894, but did occasional teaching and continued to publish. He never married, instead living with his sister Julia. 
Dedekind was elected to the Academies of Berlin and Rome, and he received honorary doctorates from the universities of Oslo, Zurich, and Braunschweig. While teaching calculus for the first time at the Polytechnic school, Dedekind developed the notion now known as a Dedekind cut. The idea of a cut is that an irrational number divides the rational numbers into two classes, with all the numbers of one class being strictly greater than all the numbers of the other class. Every location on the number line continuum contains either a rational or an irrational number; thus there are no empty locations, gaps, or discontinuities. Dedekind published his thoughts on irrational numbers and Dedekind cuts in his pamphlet Stetigkeit und irrationale Zahlen (in modern terminology, Vollständigkeit, completeness). If there exists a one-to-one correspondence between two sets, Dedekind said that the two sets were similar. Thus the set N of natural numbers can be shown to be similar to the subset of N whose members are the squares of every member of N.
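The two classes of a cut can be made concrete. The following is a minimal Python sketch (function and variable names are illustrative, not from Dedekind) of the cut that defines the square root of 2, checking on a sample of rationals that every member of one class is strictly greater than every member of the other:

```python
from fractions import Fraction

def in_lower_class(q: Fraction) -> bool:
    """Lower class of the cut defining sqrt(2): all negative rationals
    plus those non-negative rationals whose square is below 2."""
    return q < 0 or q * q < 2

# Every rational falls into exactly one class, and every member of the
# upper class is strictly greater than every member of the lower class.
samples = [Fraction(n, d) for n in range(-8, 9) for d in range(1, 9)]
lower = [q for q in samples if in_lower_class(q)]
upper = [q for q in samples if not in_lower_class(q)]
assert max(lower) < min(upper)
```

No rational number sits on the boundary, which is exactly why the cut itself can serve as the definition of the irrational number.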
Richard Dedekind
–
Richard Dedekind
Richard Dedekind
–
East German stamp from 1981, commemorating Richard Dedekind.
35.
Andrey Kolmogorov
–
Andrey Kolmogorov was born in Tambov, about 500 kilometers south-southeast of Moscow, in 1903. His mother, Maria Yakovlevna Kolmogorova, died giving birth to him, and Andrey was raised by two of his aunts in Tunoshna at the estate of his grandfather, a well-to-do nobleman. Little is known about Andrey's father; he was supposedly named Nikolai Matveevich Kataev and had been an agronomist. Nikolai had been exiled from St. Petersburg to the Yaroslavl province after his participation in the movement against the czars. He disappeared in 1919 and was presumed to have been killed in the Russian Civil War. Andrey Kolmogorov was educated in his aunt Vera's village school, and his earliest literary efforts and mathematical papers appeared in the school journal; Andrey was the editor of its mathematical section. In 1910, his aunt adopted him and they moved to Moscow, where he graduated from high school in 1920. Later that same year, Kolmogorov began to study at Moscow State University and, at the same time, at the Mendeleev Moscow Institute of Chemistry and Technology. Kolmogorov writes about this time: "I arrived at Moscow University with a knowledge of mathematics. I knew in particular the beginning of set theory. I studied many questions in articles in the Encyclopedia of Brockhaus and Efron, filling out for myself what was presented too concisely in these articles." Kolmogorov gained a reputation for his wide-ranging erudition. During the same period, Kolmogorov worked out and proved several results in set theory and in the theory of Fourier series. In 1922, Kolmogorov gained international recognition for constructing a Fourier series that diverges almost everywhere; around this time, he decided to devote his life to mathematics. In 1925, Kolmogorov graduated from Moscow State University and began to study under the supervision of Nikolai Luzin, and it was then that he became interested in probability theory. 
In 1929, Kolmogorov earned his Doctor of Philosophy degree from Moscow State University. In 1930, Kolmogorov went on his first long trip abroad, traveling to Göttingen and Munich, and then to Paris; he had various contacts in Göttingen. His pioneering work, About the Analytical Methods of Probability Theory, was published in 1931, and in the same year he became a professor at Moscow State University. In 1935, Kolmogorov became the first chairman of the department of probability theory at Moscow State University. Around the same years Kolmogorov contributed to the field of ecology and generalized the Lotka–Volterra model of predator–prey systems. In 1936, Kolmogorov and Alexandrov were involved in the persecution of their common teacher Nikolai Luzin, in the so-called Luzin affair. In a 1938 paper, Kolmogorov established the basic theorems for smoothing and predicting stationary stochastic processes, a paper that had military applications during the Cold War.
Andrey Kolmogorov
–
Andrey Kolmogorov
Andrey Kolmogorov
–
Kolmogorov (left) delivers a talk at a Soviet information theory symposium. (Tallinn, 1973).
Andrey Kolmogorov
–
Kolmogorov works on his talk (Tallinn, 1973).
36.
Artemas Martin
–
Artemas Martin was a self-educated American mathematician. Martin was born on August 3, 1835 in Steuben County, New York, and grew up in Venango County; he worked as a farmer, oil driller, and schoolteacher. In 1881, he declined an invitation to become a professor of mathematics at the Normal School in Missouri. In 1885, he became the librarian for the Survey Office of the United States Coast and Geodetic Survey, and in 1898 he became a computer in the Division of Tides. He died on November 7, 1918. From 1870 to 1875, he was editor of the Stairway Department of Clark's School Visitor, one of the magazines to which he had previously contributed. From 1875 to 1876 Martin moved to the Normal Monthly, where he published 16 articles on diophantine analysis; he subsequently became editor of the Mathematical Visitor in 1877 and of the Mathematical Magazine in 1882. In 1893 in Chicago, his paper On fifth-power numbers whose sum is a power was read at the International Mathematical Congress held in connection with the World's Columbian Exposition. Martin maintained an extensive library, now in the collections of American University. In 1877 Martin was given an honorary M.A. from Yale University, in 1882 he was awarded an honorary Ph.D. from Rutgers University, and he later received a third honorary degree. He was also a member of the American Mathematical Society, the Circolo Matematico di Palermo, the Mathematical Association of England, and the Deutsche Mathematiker-Vereinigung.
Artemas Martin
–
Artemas Martin (US Naval Observatory)
37.
Market (economics)
–
A market is one of the many varieties of systems, institutions, procedures, social relations and infrastructures whereby parties engage in exchange. While parties may exchange goods and services by barter, most markets rely on sellers offering their goods or services in exchange for money from buyers; it can be said that a market is the process by which the prices of goods and services are established. Markets facilitate trade and enable the distribution and allocation of resources in a society, and they allow any tradeable item to be evaluated and priced. A market emerges more or less spontaneously or may be constructed deliberately by human interaction in order to enable the exchange of rights to services and goods. Markets can also be worldwide, for example the global diamond trade, and national economies can be classified, for example, as developed markets or developing markets. In mainstream economics, the concept of a market is any structure that allows buyers and sellers to exchange any type of goods, services and information; the exchange of goods or services, with or without money, is a transaction. A major topic of debate is how much a given market can be considered to be a free market, that is, free from government intervention. However, it is not always clear how the allocation of resources can be improved, since there is always the possibility of government failure. 
A market sometimes emerges more or less spontaneously but is often constructed deliberately by human interaction in order to enable the exchange of rights to services. Markets of varying types can spontaneously arise whenever a party has an interest in a good or service that some other party can provide; hence there can be a market for cigarettes in correctional facilities, another for chewing gum in a playground, and yet another for contracts for the future delivery of a commodity. Markets vary in form, scale, location, and types of participants, as well as the types of goods and services traded. However, market prices may be distorted by a seller or sellers with monopoly power; such price distortions can have an adverse effect on market participants' welfare and reduce the efficiency of market outcomes. Also, the level of organization and negotiating power of buyers and sellers markedly affects the functioning of the market. Markets are a system, and systems have structure; the structure of a well-functioning market is defined by the theory of perfect competition. Market failures are often associated with time-inconsistent preferences, information asymmetries, non-perfectly competitive markets, principal–agent problems, and externalities. Among the major negative externalities which can occur as a side effect of production and market exchange are air pollution and environmental degradation. There exists a popular thought, especially among economists, that free markets would have a structure of perfect competition.
Market (economics)
–
Financial markets
Market (economics)
–
Corn Exchange, in London circa 1809.
Market (economics)
–
A market in Râmnicu Vâlcea by Amedeo Preziosi.
Market (economics)
–
Cabbage market by Václav Malý.
38.
Behavioral finance
–
Risk tolerance is a crucial factor in personal financial decision making; it is defined as an individual's willingness to engage in a financial activity whose outcome is uncertain. Behavioral economics is primarily concerned with the bounds of rationality of economic agents. Behavioral models typically integrate insights from psychology, neuroscience and microeconomic theory; in so doing, these behavioral models cover a range of concepts and methods. The study of behavioral economics includes how market decisions are made and the mechanisms that drive public choice, and the use of the term behavioral economics in U.S. scholarly papers has increased in the past few years. There are three prevalent themes in behavioral finance: heuristics (people often make decisions based on approximate rules of thumb and not strict logic); framing (the collection of anecdotes and stereotypes that make up the mental and emotional filters individuals rely on to understand and respond to events); and market inefficiencies (these include mis-pricings and non-rational decision making). During the classical period of economics, microeconomics was closely linked to psychology, and economists developed the concept of homo economicus, whose psychology was fundamentally rational. Many important neo-classical economists nonetheless employed more sophisticated psychological explanations, including Francis Edgeworth and Vilfredo Pareto. Economic psychology emerged in the 20th century in the works of Gabriel Tarde, George Katona, and Laszlo Garai. Expected utility and discounted utility models began to gain acceptance, generating testable hypotheses about decision-making given uncertainty and intertemporal consumption, and in the 1960s cognitive psychology began to shed more light on the brain as an information processing device. In mathematical psychology, there is a longstanding interest in the transitivity of preference. Prospect theory has two stages: an editing stage and an evaluation stage. 
In the editing stage, risky situations are simplified using various heuristics of choice; outcomes are then compared to a reference point and classified as gains if greater than the reference point and losses if less than the reference point. Loss aversion means that losses bite more than equivalent gains: in their 1979 paper published in Econometrica, Kahneman and Tversky found the median coefficient of loss aversion to be about 2.25, i.e. losses bite about 2.25 times more than equivalent gains. Prospect theory is able to explain everything that the two main existing decision theories, expected utility theory and rank-dependent utility theory, can explain, and it has been used to explain a range of phenomena that existing decision theories have great difficulty in explaining. These include backward-bending labor supply curves, asymmetric price elasticities, tax evasion, and co-movement of stock prices and consumption. In 1992, in the Journal of Risk and Uncertainty, Kahneman and Tversky gave their revised account of prospect theory, which they called cumulative prospect theory. The new theory eliminated the editing phase in prospect theory and focused just on the evaluation phase; its main feature was that it allowed for non-linear probability weighting in a cumulative manner, which was originally suggested in John Quiggin's rank-dependent utility theory. Psychological traits such as overconfidence, projection bias, and the effects of limited attention are now part of the theory. Behavioral economics has also been applied to intertemporal choice, which is defined as making a decision whose effects occur at a different time. Hyperbolic discounting describes the tendency to discount outcomes in the near future more than outcomes in the far future. 
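The role of loss aversion in the evaluation stage can be sketched with the standard prospect-theory value function. This is a minimal Python illustration, not Kahneman and Tversky's own code: λ ≈ 2.25 matches the loss-aversion coefficient quoted above, while the curvature parameter α ≈ 0.88 is an assumption taken from the commonly cited median estimate, not stated in this text:

```python
def value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    """Prospect-theory value function relative to a reference point of 0:
    concave for gains, convex and steeper (loss aversion) for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# Losses loom larger than equivalent gains:
assert abs(value(-100)) > value(100)
```

Losing 100 is thus felt about 2.25 times as strongly as gaining 100, which is the sense in which losses "bite" more.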
Other branches of behavioral economics enrich the model of the utility function without implying inconsistency in preferences: Ernst Fehr, Armin Falk, and Matthew Rabin studied fairness, inequity aversion, and reciprocal altruism, weakening the neoclassical assumption of perfect selfishness. This work is particularly applicable to wage setting. Behavioral economics caught on among the general public with the success of books such as Dan Ariely's Predictably Irrational.
Behavioral finance
–
Daniel Kahneman, winner of 2002 Nobel prize in economics
Behavioral finance
–
World GDP (PPP) per capita by country (2014)
39.
Reliability (statistics)
–
Reliability in statistics and psychometrics is the overall consistency of a measure. A measure is said to have high reliability if it produces similar results under consistent conditions; it is the characteristic of a set of test scores that relates to the amount of random error from the measurement process that might be embedded in the scores. Scores that are reliable are accurate, reproducible, and consistent from one testing occasion to another: that is, if the testing process were repeated with a group of test takers, essentially the same results would be obtained. Various kinds of reliability coefficients, with values ranging between 0.00 (much error) and 1.00 (no error), are used to indicate the amount of error in the scores. For example, measurements of height and weight are often extremely reliable. There are several classes of reliability estimates. Inter-rater reliability assesses the degree of agreement between two or more raters in their appraisals. Test-retest reliability assesses the degree to which test scores are consistent from one test administration to the next, with measurements gathered from a single rater who uses the same methods or instruments and the same testing conditions. Inter-method reliability assesses the degree to which test scores are consistent when there is a variation in the methods or instruments used; this allows inter-rater reliability to be ruled out, and when dealing with forms, it may be termed parallel-forms reliability. Internal consistency reliability assesses the consistency of results across items within a test. Reliability does not, however, imply validity: a measure that is measuring something consistently is not necessarily measuring what you want to be measuring. For example, while there are many tests of specific abilities, not all of them would be valid for predicting a given outcome. While reliability does not imply validity, it does place a limit on the validity of a test. 
A test that is not perfectly reliable cannot be perfectly valid, and while a reliable test may provide useful valid information, a test that is not reliable cannot possibly be valid. For example, if a set of weighing scales consistently measured the weight of an object as 500 grams over the actual weight, then the scale would be very reliable, but it would not be valid: for the scale to be valid, it should return the true weight of an object. This example demonstrates that a reliable measure is not necessarily valid. In practice, testing measures are never perfectly consistent, and theories of test reliability have been developed to estimate the effects of inconsistency on the accuracy of measurement.
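One common way to obtain a test-retest reliability coefficient is the Pearson correlation between scores from two administrations of the same test. The following is a minimal sketch; the scores are invented for illustration, not real data:

```python
from math import sqrt

# Scores for the same six test takers on two administrations of a test
# (illustrative numbers, not real data).
first  = [12, 15, 9, 20, 17, 11]
second = [13, 14, 10, 19, 18, 10]

def pearson(x, y):
    """Pearson correlation, used here as a test-retest reliability coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

r = pearson(first, second)
assert 0.9 < r < 1.0  # scores are highly consistent across the two occasions
```

A coefficient near 1.00 indicates little random error between occasions; a coefficient near 0.00 would indicate that the scores are dominated by error.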
Reliability (statistics)
–
Validity & Reliability
40.
Natural language processing
–
The history of NLP generally starts in the 1950s, although work can be found from earlier periods. In 1950, Alan Turing published an article titled Computing Machinery and Intelligence, which proposed what is now called the Turing test. The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian sentences into English; the authors claimed that within three to five years, machine translation would be a solved problem. Little further research in machine translation was conducted until the late 1980s. ELIZA, a 1960s simulation of a Rogerian psychotherapist, was a notably successful early system: using almost no information about human thought or emotion, ELIZA sometimes provided a startlingly human-like interaction, and when the patient exceeded the very small knowledge base, ELIZA might provide a generic response, for example, responding to "My head hurts" with "Why do you say your head hurts?". During the 1970s many programmers began to write conceptual ontologies, which structured real-world information into computer-understandable data; examples are MARGIE, SAM, PAM, TaleSpin, QUALM, Politics, and Plot Units. During this time, many chatterbots were written, including PARRY and Racter. Up to the 1980s, most NLP systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in NLP with the introduction of machine learning algorithms for language processing. Some of the earliest-used machine learning algorithms, such as decision trees, produced systems of hard if-then rules similar to existing hand-written rules. The cache language models upon which many speech recognition systems now rely are examples of statistical models. Many of the early successes occurred in the field of machine translation, due especially to work at IBM Research. However, most other systems depended on corpora specifically developed for the tasks implemented by these systems; as a result, a great deal of research has gone into methods of more effectively learning from limited amounts of data. Recent research has focused on unsupervised and semi-supervised learning algorithms. 
Such algorithms are able to learn from data that has not been hand-annotated with the desired answers. Generally, this task is much more difficult than supervised learning, and typically produces less accurate results for a given amount of input data; however, there is an enormous amount of non-annotated data available. Since the so-called statistical revolution in the late 1980s and mid-1990s, much NLP research has relied heavily on machine learning. Formerly, many language-processing tasks typically involved the direct hand coding of rules, which is not in general robust to natural language variation. The machine-learning paradigm calls instead for using statistical inference to automatically learn such rules through the analysis of large corpora of typical real-world examples. Many different classes of machine learning algorithms have been applied to NLP tasks. These algorithms take as input a set of features that are generated from the input data. Some of the algorithms, such as decision trees, produced systems of hard if-then rules similar to the systems of hand-written rules that were then common.
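The contrast between hand-written rules and statistical inference from a corpus can be shown with a toy supervised classifier. Everything below, including the four-sentence corpus and its labels, is invented for illustration; the point is only that the "rules" are counts learned from labeled examples rather than coded by hand:

```python
from collections import Counter

# Tiny labeled corpus (invented). The classifier learns word counts per
# label instead of relying on hand-written if-then rules.
corpus = [
    ("good great film", "pos"),
    ("great acting good story", "pos"),
    ("bad boring film", "neg"),
    ("boring bad acting", "neg"),
]

counts = {"pos": Counter(), "neg": Counter()}
for text, label in corpus:
    counts[label].update(text.split())

def classify(text: str) -> str:
    """Pick the label whose training counts best match the words,
    with add-one smoothing for unseen words."""
    def score(label):
        total = sum(counts[label].values())
        s = 1.0
        for w in text.split():
            s *= (counts[label][w] + 1) / (total + len(counts[label]))
        return s
    return max(counts, key=score)

assert classify("good film") == "pos"
assert classify("boring story") == "neg"
```

Note how "story", seen only in a positive example, is outweighed by "boring" for the second input: statistical evidence is combined rather than matched against a single rule.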
Natural language processing
–
An automated online assistant providing customer service on a web page, an example of an application where natural language processing is a major component.
41.
Power set
–
In mathematics, the power set of any set S is the set of all subsets of S, including the empty set and S itself. The power set of a set S is variously denoted as P(S), ℘(S), ℙ(S), or 2^S. In axiomatic set theory, the existence of the power set of any set is postulated by the axiom of power set. Any subset of P(S) is called a family of sets over S. If S is the set {x, y, z}, then the subsets of S are {}, {x}, {y}, {z}, {x, y}, {x, z}, {y, z}, and {x, y, z}, and hence the power set of S is {{}, {x}, {y}, {z}, {x, y}, {x, z}, {y, z}, {x, y, z}}. If S is a set with |S| = n elements, then the power set of S contains 2^n elements. This fact, which is the motivation for the notation 2^S, may be demonstrated simply as follows: first, we write any subset of S in the format (γ1, γ2, …, γn) where γi, 1 ≤ i ≤ n, can take the value 0 or 1. If γi = 1, the i-th element of S is in the subset; otherwise, it is not. Clearly the number of subsets that can be constructed this way is 2^n, as γi ∈ {0, 1}. Cantor's diagonal argument shows that the power set of a set always has strictly higher cardinality than the set itself; in particular, Cantor's theorem shows that the power set of a countably infinite set is uncountably infinite. The power set of the set of natural numbers can be put in a one-to-one correspondence with the set of real numbers. The power set of a set S, together with the operations of union, intersection, and complement, forms a Boolean algebra; in fact, one can show that any finite Boolean algebra is isomorphic to the Boolean algebra of the power set of a finite set. For infinite Boolean algebras this is no longer true, but every infinite Boolean algebra can be represented as a subalgebra of a power set Boolean algebra. The power set of a set S forms an abelian group when considered with the operation of symmetric difference, and a commutative monoid when considered with the operation of intersection; it can hence be shown that the power set, considered together with both of these operations, forms a Boolean ring. In set theory, X^Y is the set of all functions from Y to X; as 2 can be defined as {0, 1}, 2^S is the set of all functions from S to {0, 1}. 
Hence 2^S and P(S) could be considered identical set-theoretically. This notion can be applied to the example above, in which S = {x, y, z}, to see the isomorphism with the binary numbers from 0 to 2^n − 1, with n being the number of elements in the set: a 1 in the position corresponding to the location of an element in S indicates the presence of that element in the subset. The number of subsets with k elements in the power set of a set with n elements is given by the number of combinations C(n, k), also known as the binomial coefficient.
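The correspondence between subsets and n-bit strings makes the power set easy to enumerate programmatically. A minimal Python sketch (the function name is illustrative):

```python
from math import comb

def power_set(s):
    """All subsets of s via n-bit masks: bit i set in the mask means the
    i-th element is present, mirroring the (gamma_1, ..., gamma_n) encoding."""
    elems = list(s)
    n = len(elems)
    return [{elems[i] for i in range(n) if mask >> i & 1} for mask in range(2 ** n)]

subsets = power_set({"x", "y", "z"})
assert len(subsets) == 2 ** 3                                # |P(S)| = 2^n
assert set() in subsets and {"x", "y", "z"} in subsets       # {} and S itself
assert sum(1 for t in subsets if len(t) == 2) == comb(3, 2)  # C(n, k) subsets of size k
```

Each mask from 0 to 2^n − 1 is exactly one of the binary numbers described above, so the enumeration visits every subset once.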
Power set
–
The elements of the power set of the set { x, y, z } ordered in respect to inclusion.
42.
Function (mathematics)
–
In mathematics, a function is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output. An example is the function that relates each real number x to its square x². The output of a function f corresponding to an input x is denoted by f(x). In this example, if the input is −3, then the output is 9, and we may write f(−3) = 9; likewise, if the input is 3, then the output is also 9, and we may write f(3) = 9. The input variables are sometimes referred to as the arguments of the function. Functions of various kinds are the central objects of investigation in most fields of modern mathematics. There are many ways to describe or represent a function: some functions may be defined by a formula or algorithm that tells how to compute the output for a given input; others are given by a picture, called the graph of the function; in science, functions are sometimes defined by a table that gives the outputs for selected inputs; and a function could be described implicitly, for example as the inverse to another function or as a solution of a differential equation. Sometimes the codomain is called the function's range, but more commonly the word range is used to mean, instead, specifically the set of outputs. For example, we could define a function using the rule f(x) = x² by saying that the domain and codomain are the real numbers; the image of this function is the set of non-negative real numbers. In analogy with arithmetic, it is possible to define addition, subtraction, multiplication, and division of functions. Another important operation defined on functions is function composition, where the output from one function becomes the input to another function. Linking each shape to its color is a function from X to Y: each shape is linked to a color, there is no shape that lacks a color, and no shape has more than one color. This function will be referred to as the color-of-the-shape function. The input to a function is called the argument and the output is called the value. 
The set of all permitted inputs to a function is called the domain of the function; thus, the domain of the color-of-the-shape function is the set of the four shapes. The concept of a function does not require that every possible output is the value of some argument. A second example of a function is the following: the domain is chosen to be the set of natural numbers, and the codomain is the set of integers. The function associates to any natural number n the number 4 − n. For example, to 1 it associates 3 and to 10 it associates −6. A third example of a function has the set of polygons as domain and the set of natural numbers as codomain.
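The second example above can be written directly as code; the domain check is an illustrative addition to make the "permitted inputs" explicit:

```python
def f(n: int) -> int:
    """Associates to each natural number n (the domain) the integer 4 - n
    (a value in the codomain of integers)."""
    if n < 1:
        raise ValueError("the domain is the set of natural numbers")
    return 4 - n

assert f(1) == 3    # to 1 it associates 3
assert f(10) == -6  # to 10 it associates -6
```

Note that the outputs land in the integers even though the inputs are natural numbers, which is exactly why the codomain is chosen to be the set of integers.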
Function (mathematics)
–
A function f takes an input x, and returns a single output f (x). One metaphor describes the function as a "machine" or " black box " that for each input returns a corresponding output.
43.
Joint distribution
–
In the case of only two random variables, this is called a bivariate distribution, but the concept generalizes to any number of random variables, giving a multivariate distribution. The joint probability distribution can be expressed either in terms of a joint cumulative distribution function or in terms of a joint probability density function or joint probability mass function. Consider the flip of two coins; let A and B be discrete random variables associated with the outcomes of the first and second coin flips respectively. If a coin displays heads, the associated random variable takes the value 1, and otherwise 0. The joint probability mass function of A and B defines probabilities for each pair of outcomes; since each outcome is equally likely, the joint probability mass function becomes P(A = a, B = b) = 1/4 for a, b ∈ {0, 1}. Since the coin flips are independent, the joint probability mass function is the product of the marginals; in general, each coin flip is a Bernoulli trial and the sequence of flips follows a Bernoulli distribution. Consider the roll of a fair die and let A = 1 if the number is even (that is, 2, 4, or 6) and A = 0 otherwise. Furthermore, let B = 1 if the number is prime (that is, 2, 3, or 5) and B = 0 otherwise. Then the joint distribution of A and B, expressed as a probability mass function, is P(A = 0, B = 0) = 1/6, P(A = 0, B = 1) = 2/6, P(A = 1, B = 0) = 2/6, P(A = 1, B = 1) = 1/6. These probabilities necessarily sum to 1, since the probability of some combination of A and B occurring is 1. The joint probability mass function of two discrete random variables X, Y satisfies P(X = x, Y = y) = P(X = x | Y = y) · P(Y = y) = P(Y = y | X = x) · P(X = x). Again, since these are probability distributions, one has ∫x ∫y fX,Y(x, y) dy dx = 1; formally, fX,Y is the joint probability density function with respect to the product measure on the respective supports of X and Y. Two discrete random variables X and Y are independent if the joint probability mass function satisfies P(X = x, Y = y) = P(X = x) · P(Y = y) for all x and y. 
Similarly, two absolutely continuous random variables are independent if fX,Y(x, y) = fX(x) · fY(y) for all x and y. Such conditional independence relations can be represented with a Bayesian network or copula functions.
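The die example above can be checked mechanically by enumerating the six rolls. A minimal Python sketch using exact rational arithmetic:

```python
from fractions import Fraction

# Joint pmf for the die example: A = 1 if the roll is even, B = 1 if the
# roll is prime (2, 3, or 5), for a fair six-sided die.
joint = {}
for roll in range(1, 7):
    a, b = int(roll % 2 == 0), int(roll in (2, 3, 5))
    joint[(a, b)] = joint.get((a, b), Fraction(0)) + Fraction(1, 6)

assert joint[(0, 0)] == Fraction(1, 6)  # roll in {1}
assert joint[(0, 1)] == Fraction(2, 6)  # roll in {3, 5}
assert joint[(1, 0)] == Fraction(2, 6)  # roll in {4, 6}
assert joint[(1, 1)] == Fraction(1, 6)  # roll in {2}
assert sum(joint.values()) == 1         # probabilities sum to 1

# Unlike the two coin flips, A and B are NOT independent here:
# P(A=1, B=1) = 1/6, but P(A=1) * P(B=1) = 1/2 * 1/2 = 1/4.
p_a1 = joint[(1, 0)] + joint[(1, 1)]
p_b1 = joint[(0, 1)] + joint[(1, 1)]
assert joint[(1, 1)] != p_a1 * p_b1
```

The failed product test is exactly the independence criterion from the text: the joint pmf does not factor into the marginals.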
Joint distribution
–
Many sample observations (black) are shown from a joint probability distribution. The marginal densities are shown as well.
44.
Dice
–
Dice are small throwable objects with multiple resting positions, used for generating random numbers. Dice are suitable as gambling devices for games like craps and are used in non-gambling tabletop games. A traditional die is a cube, with each of its six faces showing a different number of dots from 1 to 6. When thrown or rolled, the die comes to rest showing on its surface a random integer from one to six. A variety of other devices are also described as dice; such specialized dice may have polyhedral or irregular shapes, and may be used to produce results other than one through six. Loaded and crooked dice are designed to favor some results over others for purposes of cheating or amusement. A dice tray, a tray used to contain thrown dice, is sometimes used for gambling or board games. Dice have been used since before recorded history, and it is uncertain where they originated. The oldest known dice were excavated as part of a backgammon-like game set at the Burnt City, an archeological site in south-eastern Iran, estimated to be from between 2800–2500 BCE. Other excavations from ancient tombs in the Indus Valley civilization indicate a South Asian origin. The Egyptian game of Senet was played with dice. Senet was played before 3000 BC and up to the 2nd century AD; it was likely a racing game, but there is no scholarly consensus on the rules of Senet. Dicing is mentioned as an Indian game in the Rigveda and Atharvaveda. There are several biblical references to casting lots, as in Psalm 22, indicating that dicing was commonplace when the psalm was composed. Knucklebones was a game of skill played by women and children. Although gambling was illegal, many Romans were passionate gamblers who enjoyed dicing, which was known as aleam ludere. Dicing was even a popular pastime of emperors; letters by Augustus to Tacitus and his daughter recount his hobby of dicing. There were two sizes of Roman dice. Tali were large dice inscribed with one, three, four, and six on four sides.
Tesserae were smaller dice with sides numbered one to six. Twenty-sided dice date back to the 2nd century AD, and to Ptolemaic Egypt as early as the 2nd century BC. Dominoes and playing cards originated in China as developments from dice. The transition from dice to playing cards occurred in China around the Tang dynasty. In Japan, dice were used to play a popular game called sugoroku.
Dice
–
Four differently colored traditional dice showing all six different sides
Dice
–
The Royal Game of Ur, a Mesopotamian board game played with dice
Dice
–
Bone die found at Cantonment Clinch (1823–1834), an American fort used in the American Civil War by both Confederate and Union forces at separate times
Dice
–
A collection of historical dice from various regions of Asia
45.
Inverse probability
–
In probability theory, inverse probability is an obsolete term for the probability distribution of an unobserved variable. The development of the field and its terminology, from inverse probability to Bayesian probability, is described by Fienberg. The term inverse probability appears in an 1837 paper of De Morgan, in reference to Laplace's method of probability, though the term itself does not occur in Laplace's works. Later, Jeffreys used the term in his defense of the methods of Bayes and Laplace. The term Bayesian, which displaced inverse probability, was introduced by Ronald Fisher around 1950. Inverse probability, variously interpreted, was the dominant approach to statistics until the development of frequentism in the early 20th century by Ronald Fisher, Jerzy Neyman and Egon Pearson. Following the development of frequentism, the terms frequentist and Bayesian developed to contrast these approaches. The distribution of the observed data given the unobserved variable is called the direct probability. The inverse probability problem was the problem of estimating a parameter from experimental data in the sciences, especially astronomy. A simple example would be the problem of estimating the position of a star in the sky for purposes of navigation: given the data, one must estimate the true position. This problem would now be considered one of inferential statistics. The terms direct probability and inverse probability were in use until the middle part of the 20th century, when the terms likelihood function and posterior distribution became prevalent.
Inverse probability
–
Ronald Fisher
46.
Newtonian mechanics
–
In physics, classical mechanics is one of the two major sub-fields of mechanics, along with quantum mechanics. Classical mechanics is concerned with the set of physical laws describing the motion of bodies under the influence of a system of forces. The study of the motion of bodies is an ancient one, making classical mechanics one of the oldest and largest subjects in science, engineering and technology. Classical mechanics describes the motion of objects, from projectiles to parts of machinery, as well as astronomical objects such as spacecraft, planets and stars. Within classical mechanics are fields of study that describe the behavior of solids, liquids and gases. Classical mechanics also provides extremely accurate results as long as the domain of study is restricted to large objects and the speeds involved do not approach the speed of light. When both quantum and classical mechanics cannot apply, such as at the quantum level with very high speeds, quantum field theory becomes applicable. Since these aspects of physics were developed long before the emergence of quantum physics and relativity, some sources exclude Einstein's theory of relativity from classical mechanics; however, a number of modern sources do include relativistic mechanics, which in their view represents classical mechanics in its most developed and accurate form. Later, more abstract and general methods were developed, leading to reformulations of classical mechanics known as Lagrangian mechanics and Hamiltonian mechanics. These advances were largely made in the 18th and 19th centuries, and they extend substantially beyond Newton's work, particularly through their use of analytical mechanics. The following introduces the basic concepts of classical mechanics. For simplicity, it often models real-world objects as point particles; the motion of a point particle is characterized by a small number of parameters: its position, mass, and the forces applied to it. Each of these parameters is discussed in turn. In reality, the kind of objects that classical mechanics can describe always have a non-zero size.
Objects with non-zero size have more complicated behavior than hypothetical point particles, because of the additional degrees of freedom. However, the results for point particles can be used to study such objects by treating them as composite objects, made of a large number of collectively acting point particles. The center of mass of a composite object behaves like a point particle. Classical mechanics uses common-sense notions of how matter and forces exist and interact. It assumes that matter and energy have definite, knowable attributes, such as where an object is in space. Non-relativistic mechanics also assumes that forces act instantaneously. The position of a point particle is defined with respect to a fixed reference point in space called the origin O. A simple coordinate system might describe the position of a point P by means of a vector designated r, which points from the origin O to the point P. In general, the point particle need not be stationary relative to O, so that r is a function of t, the time.
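The kinematics just described, a position r(t) from which velocity and acceleration follow by differentiation, can be sketched numerically. In this illustrative example (not from the text), r(t) is a hypothetical projectile path, and the derivatives are estimated with central finite differences:

```python
# Position r(t) of a point particle, as a tuple (x, y) in metres.
# The trajectory is a hypothetical projectile launched at t = 0.
def r(t):
    g = 9.81                                    # gravitational acceleration, m/s^2
    return (3.0 * t, 4.0 * t - 0.5 * g * t * t)

def derivative(f, t, h=1e-5):
    # Central finite difference, applied componentwise to the tuple f(t).
    fp, fm = f(t + h), f(t - h)
    return tuple((p - m) / (2 * h) for p, m in zip(fp, fm))

v = derivative(r, 1.0)                              # velocity at t = 1 s
a = derivative(lambda t: derivative(r, t), 1.0)     # acceleration at t = 1 s

print(v)   # approximately (3.0, -5.81): constant horizontal speed, falling vertically
print(a)   # approximately (0.0, -9.81): only gravity acts
```

For this quadratic trajectory the central difference is exact up to rounding, which is why the numerical estimates reproduce the analytic derivatives so closely.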
Newtonian mechanics
–
Sir Isaac Newton (1643–1727), an influential figure in the history of physics and whose three laws of motion form the basis of classical mechanics
Newtonian mechanics
–
Diagram of orbital motion of a satellite around the earth, showing perpendicular velocity and acceleration (force) vectors.
Newtonian mechanics
–
Hamilton's greatest contribution is perhaps the reformulation of Newtonian mechanics, now called Hamiltonian mechanics.
47.
Chaos theory
–
Chaos theory is a branch of mathematics focused on the behavior of dynamical systems that are highly sensitive to initial conditions. This happens even though these systems are deterministic, meaning that their behavior is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as: "Chaos: When the present determines the future, but the approximate present does not approximately determine the future." Chaotic behavior exists in many natural systems, such as weather and climate. It also occurs spontaneously in some systems with artificial components, such as road traffic. This behavior can be studied through analysis of a mathematical model, or through analytical techniques such as recurrence plots and Poincaré maps. Chaos theory has applications in several disciplines, including meteorology, sociology, physics, environmental science, computer science, engineering, economics, biology and ecology. The theory formed the basis for such fields of study as complex dynamical systems, edge of chaos theory, and self-assembly processes. Chaos theory concerns deterministic systems whose behavior can in principle be predicted. Chaotic systems are predictable for a while and then appear to become random; the amount of time for which the behavior of a chaotic system can be effectively predicted depends on its Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days. In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast. This means that, in practice, a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random. In common usage, chaos means a state of disorder. However, in chaos theory, the term is defined more precisely.
Although no universally accepted mathematical definition of chaos exists, a commonly used definition, originally formulated by Robert L. Devaney, says that to classify a dynamical system as chaotic it must be sensitive to initial conditions, be topologically transitive, and have dense periodic orbits. In some cases the last two properties have been shown to imply sensitivity to initial conditions; in these cases, while it is often the most practically significant property, sensitivity to initial conditions need not be stated in the definition. If attention is restricted to intervals, the second property implies the other two. An alternative, and in general weaker, definition of chaos uses only the first two properties in the above list. Sensitivity to initial conditions means that each point in a chaotic system is arbitrarily closely approximated by other points with significantly different future paths. Thus, an arbitrarily small change, or perturbation, of the current trajectory may lead to significantly different future behavior. This idea is popularly known as the butterfly effect, from the title of a talk Lorenz gave in Washington, D.C., entitled "Predictability: Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas?" The flapping wing represents a small change in the initial condition of the system, which causes a chain of events leading to large-scale phenomena.
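Sensitive dependence on initial conditions can be demonstrated in a few lines. The sketch below uses the logistic map x → r·x·(1 − x) with r = 4, a standard chaotic example chosen here for illustration (it is not one of the systems discussed in the text), and follows two trajectories whose starting points differ by one part in 10^10:

```python
# Iterate the logistic map x -> r*x*(1-x), a standard chaotic system for r = 4.
def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.2, 50)
b = trajectory(0.2 + 1e-10, 50)    # perturbed by one part in 10^10

print(abs(a[5] - b[5]))    # still tiny: the gap can grow by at most 4x per step
print(abs(a[50] - b[50]))  # after ~35 doublings the trajectories have decorrelated
```

This is exactly the exponential growth of forecast uncertainty described above: the separation roughly doubles each step (the Lyapunov exponent of this map is ln 2), so a 10^-10 perturbation saturates after a few dozen iterations.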
Chaos theory
–
The Lorenz attractor displays chaotic behavior. These two plots demonstrate sensitive dependence on initial conditions within the region of phase space occupied by the attractor.
Chaos theory
–
A plot of Lorenz attractor for values r = 28, σ = 10, b = 8/3
Chaos theory
–
Turbulence in the tip vortex from an airplane wing. Studies of the critical point beyond which a system creates turbulence were important for chaos theory, analyzed for example by the Soviet physicist Lev Landau, who developed the Landau-Hopf theory of turbulence. David Ruelle and Floris Takens later predicted, against Landau, that fluid turbulence could develop through a strange attractor, a main concept of chaos theory.
Chaos theory
–
A conus textile shell, similar in appearance to Rule 30, a cellular automaton with chaotic behaviour.
48.
Roulette
–
Roulette is a casino game named after the French word meaning "little wheel". The ball eventually loses momentum and falls onto the wheel and into one of 37 or 38 colored and numbered pockets on the wheel. The first form of roulette was devised in 18th century France. A century earlier, Blaise Pascal had introduced a form of roulette in the 17th century in his search for a perpetual motion machine. The game has been played in its present form since as early as 1796 in Paris. The description included the house pockets: "There are exactly two slots reserved for the bank, whence it derives its sole mathematical advantage." It then goes on to describe the layout with "two betting spaces containing the bank's two numbers, zero and double zero". The book was published in 1801. An even earlier reference to a game of this name was published in regulations for New France in 1758, which banned the games of "dice, hoca, faro, and roulette". The roulette wheels used in the casinos of Paris in the late 1790s had red for the single zero and black for the double zero. To avoid confusion, the color green was selected for the zeros in roulette wheels starting in the 1800s. In some forms of early American roulette wheels, as shown in the 1886 Hoyle gambling books, there were numbers 1 through 28, plus a single zero, a double zero, and an American Eagle. The Eagle slot, which was a symbol of American liberty, was a slot that brought the casino extra edge. Soon, the tradition vanished and since then the wheel features only numbered slots. Existing wheels with Eagle symbols are rare, with fewer than a half-dozen copies known to exist. Authentic Eagled wheels in excellent condition can fetch tens of thousands of dollars at auction. In the 19th century, roulette spread all over Europe and the US, becoming one of the most famous and most popular casino games.
A legend says that François Blanc supposedly bargained with the devil to obtain the secrets of roulette. The legend is based on the fact that the sum of all the numbers on the roulette wheel is 666, which is the "Number of the Beast". In the United States, the French double zero wheel made its way up the Mississippi from New Orleans, and this eventually evolved into the American style roulette game, as distinct from the traditional French game. The American game developed in the gambling dens across the new territories, where makeshift games had been set up, whereas the French game evolved with style and leisure in Monte Carlo. However, it is the American style layout, with its simplified betting and fast cash action, using either a single or double zero wheel, that now dominates in most casinos around the world. During the first part of the 20th century, the only casino towns of note were Monte Carlo, with the traditional single zero French wheel, and Las Vegas, with the American double zero wheel. In the 1970s, casinos began to flourish around the world, and by 2008 there were several hundred casinos worldwide offering roulette games.
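The bank's "sole mathematical advantage" from the zero pockets can be quantified. The sketch below computes the expected value of a one-unit straight-up (single number) bet, assuming the standard 35-to-1 payout, which is a well-known rule not stated in the text:

```python
from fractions import Fraction

# Expected net gain of a 1-unit straight-up bet paying 35 to 1.
# A single-zero wheel has 37 pockets; a double-zero wheel has 38.
def straight_up_ev(pockets):
    win = Fraction(1, pockets)
    return win * 35 - (1 - win) * 1   # +35 units on a win, -1 unit otherwise

print(straight_up_ev(37))  # -1/37, a house edge of about 2.70 %
print(straight_up_ev(38))  # -1/19, a house edge of about 5.26 %
```

The extra (double zero) pocket roughly doubles the house edge, which is why the distinction between the French and American wheels matters to players.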
Roulette
–
Roulette ball
Roulette
–
French roulette
Roulette
–
"Gwendolen at the roulette table" - 1910 illustration to George Eliot ' " Daniel Deronda ".
Roulette
–
18th century E.O. wheel with gamblers
49.
Wave function
–
A wave function in quantum physics is a description of the quantum state of a system. The wave function is a probability amplitude, and the probabilities for the possible results of measurements made on the system can be derived from it. The most common symbols for a wave function are the Greek letters ψ or Ψ (lower-case and capital psi, respectively). The wave function is a function of the degrees of freedom corresponding to some set of commuting observables. Once such a representation is chosen, the wave function can be derived from the quantum state. For a given system, the choice of which commuting degrees of freedom to use is not unique. Some particles, like electrons and photons, have nonzero spin, and the wave function for such particles includes spin as an intrinsic, discrete degree of freedom. Other discrete variables can also be included, such as isospin; these values are often displayed in a column matrix. According to the superposition principle of quantum mechanics, wave functions can be added together and multiplied by complex numbers to form new wave functions. The Schrödinger equation determines how wave functions evolve over time, and a wave function behaves qualitatively like other waves, such as water waves or waves on a string, because the Schrödinger equation is mathematically a type of wave equation. This explains the name "wave function", and gives rise to wave–particle duality. However, the wave function in quantum mechanics describes a kind of physical phenomenon, still open to different interpretations, which fundamentally differs from that of classical mechanical waves. The integral of the squared modulus of the wave function, over all the degrees of freedom, must be equal to 1. This general requirement a wave function must satisfy is called the normalization condition. Since the wave function is complex-valued, only its relative phase and relative magnitude can be measured.
In 1905 Einstein postulated the proportionality between the frequency f of a photon and its energy E, E = hf, and in 1916 the corresponding relation between a photon's momentum p and wavelength λ, λ = h/p, where h is the Planck constant. The equations represent wave–particle duality for both massless and massive particles. In the 1920s and 1930s, quantum mechanics was developed using calculus and linear algebra. Those who used the techniques of calculus included Louis de Broglie, Erwin Schrödinger, and others, developing wave mechanics. Those who applied the methods of linear algebra included Werner Heisenberg, Max Born, and others, developing matrix mechanics. Schrödinger subsequently showed that the two approaches were equivalent. However, no one was clear on how to interpret the wave function. At first, Schrödinger and others thought that wave functions represent particles that are spread out, with most of the particle being where the wave function is large. This was shown to be incompatible with the scattering of a wave packet representing a particle off a target: while a scattered particle may scatter in any direction, it does not break up and take off in all directions.
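The Planck–Einstein relations E = hf and p = h/λ can be evaluated directly. A minimal sketch, using the exact SI values of h and c; the 532 nm wavelength (a common green laser) is an illustrative choice, not taken from the text:

```python
# Photon energy and momentum from the Planck-Einstein relations.
h = 6.62607015e-34      # Planck constant, J*s (exact by SI definition)
c = 2.99792458e8        # speed of light in vacuum, m/s (exact)

wavelength = 532e-9     # m, an illustrative green-light wavelength
f = c / wavelength      # frequency, Hz
E = h * f               # photon energy E = h*f, in joules
p = h / wavelength      # photon momentum p = h/lambda, in kg*m/s

print(f)   # about 5.64e14 Hz
print(E)   # about 3.73e-19 J (roughly 2.3 eV)
print(p)   # about 1.25e-27 kg*m/s
```

Note that E = p·c holds for the photon, as it must for a massless particle, which ties the two 1905/1916 relations together.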
Wave function
–
The electron probability density for the first few hydrogen atom electron orbitals shown as cross-sections. These orbitals form an orthonormal basis for the wave function of the electron. Different orbitals are depicted with different scale.
50.
Albert Einstein
–
Albert Einstein was a German-born theoretical physicist. He developed the theory of relativity, one of the two pillars of modern physics; Einstein's work is also known for its influence on the philosophy of science. Einstein is best known in popular culture for his mass–energy equivalence formula E = mc². Near the beginning of his career, Einstein thought that Newtonian mechanics was no longer enough to reconcile the laws of classical mechanics with the laws of the electromagnetic field. This led him to develop his theory of relativity during his time at the Swiss Patent Office in Bern. Shortly before, in 1901, he had acquired Swiss citizenship, which he kept for his whole life. He continued to deal with problems of statistical mechanics and quantum theory, which led to his explanations of particle theory and the motion of molecules. He also investigated the thermal properties of light, which laid the foundation of the photon theory of light. In 1917, Einstein applied the general theory of relativity to model the large-scale structure of the universe. He was visiting the United States when Adolf Hitler came to power in 1933 and, being Jewish, did not go back to Germany; he settled in the United States, becoming an American citizen in 1940. On the eve of World War II, he endorsed a letter to President Franklin D. Roosevelt alerting him to the potential development of extremely powerful bombs of a new type; this eventually led to what would become the Manhattan Project. Einstein supported defending the Allied forces, but generally denounced the idea of using the newly discovered nuclear fission as a weapon. Later, with the British philosopher Bertrand Russell, Einstein signed the Russell–Einstein Manifesto, which highlighted the danger of nuclear weapons. Einstein was affiliated with the Institute for Advanced Study in Princeton, New Jersey, until his death in 1955. Einstein published more than 300 scientific papers along with over 150 non-scientific works. On 5 December 2014, universities and archives announced the release of Einstein's papers, comprising more than 30,000 unique documents.
Einstein's intellectual achievements and originality have made the word "Einstein" synonymous with "genius". Albert Einstein was born in Ulm, in the Kingdom of Württemberg in the German Empire, on 14 March 1879. His parents were Hermann Einstein, a salesman and engineer, and Pauline Koch. The Einsteins were non-observant Ashkenazi Jews, and Albert attended a Catholic elementary school in Munich from the age of 5 for three years. At the age of 8, he was transferred to the Luitpold Gymnasium. In 1894, his father's company lost an important contract, and the loss forced the sale of the Munich factory. In search of business, the Einstein family moved to Italy, first to Milan. When the family moved on to Pavia, Einstein stayed in Munich to finish his studies at the Luitpold Gymnasium. His father intended for him to pursue electrical engineering, but Einstein clashed with the authorities and resented the school's regimen. He later wrote that the spirit of learning and creative thought was lost in strict rote learning. At the end of December 1894, he travelled to Italy to join his family in Pavia, convincing the school to let him go by using a doctor's note. During his time in Italy he wrote an essay with the title "On the Investigation of the State of the Ether in a Magnetic Field".
Albert Einstein
–
Albert Einstein in 1921
Albert Einstein
–
Einstein at the age of 3 in 1882
Albert Einstein
–
Albert Einstein in 1893 (age 14)
Albert Einstein
–
Einstein's matriculation certificate at the age of 17, showing his final grades from the Argovian cantonal school (Aargauische Kantonsschule, on a scale of 1–6, with 6 being the highest possible mark)
51.
Max Born
–
Max Born was a German physicist and mathematician who was instrumental in the development of quantum mechanics. He also made contributions to solid-state physics and optics and supervised the work of a number of notable physicists in the 1920s and 1930s. Born won the 1954 Nobel Prize in Physics for his fundamental research in quantum mechanics, especially for his statistical interpretation of the wave function. He wrote his Ph.D. thesis on the subject of "Stability of Elastica in a Plane and Space". In 1905, he began researching special relativity with Minkowski, and subsequently wrote his habilitation thesis on the Thomson model of the atom. In the First World War, after originally being placed as a radio operator, he was moved to research duties regarding sound ranging. In 1921, Born returned to Göttingen, arranging another chair for his long-time friend and colleague James Franck. Under Born, Göttingen became one of the world's foremost centres for physics. In 1925, Born and Werner Heisenberg formulated the matrix mechanics representation of quantum mechanics. The following year, he formulated the now-standard interpretation of the probability density function for ψ*ψ in the Schrödinger equation. His influence extended far beyond his own research: Max Delbrück, Siegfried Flügge, Friedrich Hund, Pascual Jordan, Maria Goeppert-Mayer, Lothar Wolfgang Nordheim, Robert Oppenheimer and others took doctorates or worked under him at Göttingen. In January 1933, the Nazi Party came to power in Germany, and Born, who was Jewish, was suspended from his professorship. Max Born became a naturalised British subject on 31 August 1939, and he remained at Edinburgh until 1952. He retired to Bad Pyrmont, in West Germany, and died in a hospital in Göttingen on 5 January 1970. Max Born was born on 11 December 1882 in Breslau, which at the time of Born's birth was part of the Prussian Province of Silesia in the German Empire. His mother died when Max was four years old, on 29 August 1886. Max had a sister, Käthe, who was born in 1884, and a half-brother, Wolfgang, from his father's second marriage; Wolfgang later became Professor of Art History at the City College of New York.
Initially educated at the König-Wilhelm-Gymnasium in Breslau, Born entered the University of Breslau in 1901. The German university system allowed students to move easily from one university to another, so he spent summer semesters at Heidelberg University in 1902 and the University of Zurich in 1903. Fellow students at Breslau, Otto Toeplitz and Ernst Hellinger, told Born about the University of Göttingen. At Göttingen he found three renowned mathematicians: Felix Klein, David Hilbert and Hermann Minkowski. Very soon after his arrival, Born formed close ties to the latter two men. Being class scribe put Born into regular, invaluable contact with Hilbert; Hilbert became Born's mentor after selecting him to be the first to hold the unpaid, semi-official position of assistant. Born's introduction to Minkowski came through Born's stepmother, Bertha, as she knew Minkowski from dancing classes in Königsberg; the introduction netted Born invitations to the Minkowski household for Sunday dinners. In addition, while performing his duties as scribe and assistant, Born often saw Minkowski at Hilbert's house. Born's relationship with Klein was more problematic.
Max Born
–
Max Born (1882–1970)
Max Born
–
Solvay Conference, 1927. Born is second from the right in the second row, between Louis de Broglie and Niels Bohr.
Max Born
–
Born's gravestone in Göttingen is inscribed with the uncertainty principle, which he put on rigid mathematical footing.
52.
Statistical
–
Statistics is a branch of mathematics dealing with the collection, analysis, interpretation, presentation, and organization of data. In applying statistics to, e.g., a scientific, industrial, or social problem, it is conventional to begin with a statistical population or process to be studied. Populations can be diverse topics such as "all people living in a country" or "every atom composing a crystal". Statistics deals with all aspects of data, including the planning of data collection in terms of the design of surveys and experiments. Statistician Sir Arthur Lyon Bowley defines statistics as "Numerical statements of facts in any department of inquiry placed in relation to each other". When census data cannot be collected, statisticians collect data by developing specific experiment designs and survey samples. Representative sampling assures that inferences and conclusions can safely extend from the sample to the population as a whole. In contrast, an observational study does not involve experimental manipulation. Inferences in mathematical statistics are made under the framework of probability theory, which deals with the analysis of random phenomena. A standard statistical procedure involves the test of the relationship between two data sets, or between a data set and synthetic data drawn from an idealized model. A hypothesis is proposed for the statistical relationship between the two data sets, and this is compared as an alternative to an idealized null hypothesis of no relationship between two data sets. Rejecting or disproving the null hypothesis is done using statistical tests that quantify the sense in which the null can be proven false. Working from a null hypothesis, two basic forms of error are recognized: Type I errors (the null hypothesis is falsely rejected, giving a "false positive") and Type II errors (the null hypothesis fails to be rejected when it is actually false, giving a "false negative"). Multiple problems have come to be associated with this framework, ranging from obtaining a sufficient sample size to specifying an adequate null hypothesis. Measurement processes that generate statistical data are also subject to error.
Many of these errors are classified as random (noise) or systematic (bias), and the presence of missing data or censoring may result in biased estimates; specific techniques have been developed to address these problems. Statistics continues to be an area of active research, for example on the problem of how to analyze Big data. Statistics is a body of science that pertains to the collection, analysis, interpretation or explanation, and presentation of data. Some consider statistics to be a distinct mathematical science rather than a branch of mathematics. While many scientific investigations make use of data, statistics is concerned with the use of data in the context of uncertainty. Mathematical techniques used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure-theoretic probability theory. In applying statistics to a problem, it is common practice to start with a population or process to be studied. Populations can be diverse topics such as "all people living in a country" or "every atom composing a crystal". Ideally, statisticians compile data about the entire population; this may be organized by governmental statistical institutes.
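The null-hypothesis testing procedure described above can be made concrete with a small worked example (the coin-flip numbers are invented for illustration): testing whether a coin is fair after observing 60 heads in 100 flips, using an exact two-sided binomial test.

```python
from math import comb

# Observed data: 60 heads in 100 flips.  Null hypothesis: the coin is
# fair, i.e. P(heads) = 0.5.
n, k = 100, 60

def binom_pmf(n, k, p=0.5):
    # Probability of exactly k successes in n Bernoulli(p) trials.
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Two-sided p-value: probability, under the null, of an outcome at
# least as far from the expected 50 heads as the one observed.
p_value = sum(binom_pmf(n, i) for i in range(n + 1)
              if abs(i - n / 2) >= abs(k - n / 2))
print(p_value)   # about 0.057: not significant at the conventional 5 % level
```

Failing to reject here does not prove the coin is fair; it only says the data are not extreme enough under the null, which is the Type II error risk discussed above.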
Statistical
–
Scatter plots are used in descriptive statistics to show the observed relationships between different variables.
Statistical
–
More probability density is found as one gets closer to the expected (mean) value in a normal distribution. Statistics used in standardized testing assessment are shown. The scales include standard deviations, cumulative percentages, percentile equivalents, Z-scores, T-scores, standard nines, and percentages in standard nines.
Statistical
–
Gerolamo Cardano, the earliest pioneer on the mathematics of probability.
Statistical
–
Karl Pearson, a founder of mathematical statistics.
53.
Class membership probabilities
–
Probabilistic classifiers provide classification with a degree of certainty, which can be useful in its own right or when combining classifiers into ensembles. Probabilistic classifiers generalize the notion of classifiers: instead of functions, they are conditional distributions Pr(Y | X), meaning that for a given x ∈ X, they assign probabilities to all y ∈ Y. Hard classification can then be done using the optimal decision rule ŷ = argmax_y Pr(Y = y | X); in plain English, the predicted class is the one with the highest probability. Binary probabilistic classifiers are also called binomial regression models in statistics; in econometrics, probabilistic classification in general is called discrete choice. Some classification models, such as naive Bayes and logistic regression, are naturally probabilistic; other models, such as support vector machines, are not, but methods exist to turn them into probabilistic classifiers. Some models, such as logistic regression, are conditionally trained. Not all classification models are probabilistic, and some that are, notably naive Bayes classifiers, decision trees and boosting methods, produce distorted class probability distributions. For classification models that produce some kind of score on their outputs, the scores can be calibrated into probabilities; for the binary case, a common approach is to apply Platt scaling, which learns a logistic regression model on the scores. An alternative method, using isotonic regression, is generally superior to Platt's method when sufficient training data is available. Commonly used loss functions for probabilistic classification include log loss and the mean squared error between the predicted and the true probability distributions. The former of these is commonly used to train logistic models. A method used to assign scores to pairs of predicted probabilities and actual discrete outcomes, so that different predictive methods can be compared, is called a scoring rule.
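The argmax decision rule and the log loss mentioned above can be shown in a few lines. The class labels and probabilities below are invented for illustration; they stand in for the output Pr(Y | X = x) of some trained probabilistic classifier:

```python
from math import log

# Hypothetical output of a probabilistic classifier for one input x:
# a distribution over the candidate classes.
probs = {"cat": 0.2, "dog": 0.7, "bird": 0.1}   # Pr(Y = y | X = x)

# Hard classification via the optimal decision rule: pick the argmax.
y_hat = max(probs, key=probs.get)
print(y_hat)                 # dog

# Log loss for this single prediction, given that the true class is "dog":
# the loss is -log of the probability assigned to the true class.
true_label = "dog"
loss = -log(probs[true_label])
print(round(loss, 3))        # 0.357
```

Note how log loss rewards well-calibrated probabilities: assigning the true class probability 1 gives loss 0, while assigning it probability near 0 gives an unboundedly large loss, which is why distorted probability outputs (as from uncalibrated models) are penalized.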
Class membership probabilities
–
Machine learning and data mining
54.
Heuristics in judgment and decision-making
–
In psychology, heuristics are simple, efficient rules which people often use to form judgments and make decisions. They are mental shortcuts that usually involve focusing on one aspect of a complex problem and ignoring others. These rules work well under most circumstances, but they can lead to systematic deviations from logic, probability or rational choice theory. The resulting errors are called cognitive biases, and many different types have been documented. These have been shown to affect people's choices in situations like valuing a house, deciding the outcome of a legal case, or making an investment decision. Heuristics usually govern automatic, intuitive judgments but can also be used as deliberate mental strategies when working from limited information. Cognitive scientist Herbert A. Simon originally proposed that human judgments are limited by available information, time constraints, and cognitive limitations, calling this bounded rationality. In the early 1970s, psychologists Amos Tversky and Daniel Kahneman demonstrated three heuristics that underlie a wide range of intuitive judgments. These findings set in motion the heuristics and biases research program, which studies how people make real-world judgments. This research challenged the idea that human beings are rational actors, but provided a theory of information processing to explain how people make estimates or choices. The heuristics-and-biases tradition has been criticised by Gerd Gigerenzer and others for being too focused on how heuristics lead to errors; the critics argue that heuristics can be seen as rational in an underlying sense. According to this perspective, heuristics are good enough for most purposes without being too demanding on the brain's resources. Another theoretical perspective sees heuristics as fully rational in that they are rapid, can be made without full information, and can be as accurate as more complicated procedures.
By understanding the role of heuristics in human psychology, marketers and other persuaders can influence decisions. In their initial research, Tversky and Kahneman proposed three heuristics: availability, representativeness, and anchoring and adjustment. Subsequent work has identified many more. Heuristics that underlie judgment are called judgment heuristics; another type, called evaluation heuristics, are used to judge the desirability of possible choices. In psychology, availability is the ease with which an idea can be brought to mind, and people often estimate how likely or how frequent an event is on the basis of its availability. When an infrequent event can be brought easily and vividly to mind, people tend to overestimate its likelihood. For example, people overestimate their likelihood of dying in a dramatic event such as a tornado or terrorism. Dramatic, violent deaths are more highly publicised and therefore have a higher availability. On the other hand, common but mundane events are hard to bring to mind; these include deaths from suicides, strokes, and diabetes. This heuristic is one of the reasons why people are more easily swayed by a single, vivid story than by a large body of statistical evidence. It may also play a role in the appeal of lotteries.
Heuristics in judgment and decision-making
–
The amount of money people will pay in an auction for a bottle of wine can be influenced by considering an arbitrary two-digit number.
Heuristics in judgment and decision-making
–
A visual example of attribute substitution. This illusion works because the 2D size of parts of the scene is judged on the basis of 3D (perspective) size, which is rapidly calculated by the visual system.
55.
Probability density function
–
In a more precise sense, the PDF is used to specify the probability of the random variable falling within a particular range of values, as opposed to taking on any one value. The probability density function is nonnegative everywhere, and its integral over the entire space is equal to one. The terms probability distribution function and probability function have also sometimes been used to denote the probability density function. However, this use is not standard among probabilists and statisticians, and further confusion of terminology exists because density function has also been used for what is here called the probability mass function. In general though, the PMF is used in the context of discrete random variables. Suppose a species of bacteria typically lives 4 to 6 hours. What is the probability that a bacterium lives exactly 5 hours? A lot of bacteria live for approximately 5 hours, but there is no chance that any given bacterium dies at exactly 5.0000000000... hours. Instead we might ask: what is the probability that the bacterium dies between 5 hours and 5.01 hours? Let's say the answer is 0.02. Next: what is the probability that the bacterium dies between 5 hours and 5.001 hours? The answer is probably around 0.002, since this is 1/10th of the previous interval. The probability that the bacterium dies between 5 hours and 5.0001 hours is probably about 0.0002, and so on. In these three examples, the ratio (probability of dying during an interval) / (duration of the interval) is approximately constant, and equal to 2 per hour. For example, there is 0.02 probability of dying in the 0.01-hour interval between 5 and 5.01 hours, and (0.02 probability / 0.01 hours) = 2 hour−1. This quantity 2 hour−1 is called the probability density for dying at around 5 hours. Therefore, in response to the question "What is the probability that the bacterium dies at 5 hours?", a literally correct but unhelpful answer is 0, but a better answer can be written as (2 hour−1) dt. This is the probability that the bacterium dies within an infinitesimal window of time around 5 hours, where dt is the duration of this window. 
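The shrinking-interval argument above can be checked numerically. The sketch below uses a hypothetical lifetime model, not from the text: a normal distribution centred at 5 hours whose spread is chosen so the density at 5 hours is exactly 2 per hour. The ratio P(5 ≤ T ≤ 5 + h) / h then settles near the density as h shrinks.

```python
import math

# Hypothetical lifetime model (an assumption for illustration, not from the text):
# a normal distribution centred at 5 hours, with its spread chosen so that the
# density at t = 5 equals 2 per hour, matching the number in the example.
MU = 5.0
SIGMA = 1.0 / (2.0 * math.sqrt(2.0 * math.pi))  # makes the peak density 2 hour^-1

def cdf(t):
    """P(T <= t) for the normal model, via the error function."""
    return 0.5 * (1.0 + math.erf((t - MU) / (SIGMA * math.sqrt(2.0))))

# The ratio P(5 <= T <= 5+h) / h approaches the density, 2 per hour, as h shrinks.
for h in (0.01, 0.001, 0.0001):
    prob = cdf(5.0 + h) - cdf(5.0)
    print(f"h = {h}: probability = {prob:.6f}, ratio = {prob / h:.4f}")  # ratio ≈ 2
```

The same experiment works with any smooth density; only the limiting value of the ratio changes.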
For example, the probability that it lives longer than 5 hours is found by integrating f from 5 hours onward; in this example there is a probability density function f with f(5 hours) = 2 hour−1. The integral of f over any window of time (not only infinitesimal windows but also large windows) is the probability that the bacterium dies in that window. A probability density function is most commonly associated with absolutely continuous univariate distributions. A random variable X has density f_X, where f_X is a non-negative Lebesgue-integrable function, if Pr[a ≤ X ≤ b] = ∫_a^b f_X(x) dx. That is, f_X is any function with this property. In the continuous univariate case above, the reference measure is the Lebesgue measure.
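A minimal sketch of the defining identity Pr[a ≤ X ≤ b] = ∫_a^b f_X(x) dx, using the standard normal density as an illustrative choice of f_X and the trapezoid rule for the integral:

```python
import math

def f_X(x):
    """Density of the standard normal N(0, 1) -- an illustrative choice of f_X."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def prob_interval(a, b, n=100_000):
    """Pr[a <= X <= b] = integral of f_X over [a, b], by the trapezoid rule."""
    h = (b - a) / n
    total = 0.5 * (f_X(a) + f_X(b)) + sum(f_X(a + i * h) for i in range(1, n))
    return total * h

def Phi(x):
    """Closed-form normal CDF, for checking the numeric integral."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

print(prob_interval(-1.0, 1.0))   # ≈ 0.6827, the familiar one-sigma probability
print(Phi(1.0) - Phi(-1.0))       # same value from the closed form
print(prob_interval(-8.0, 8.0))   # ≈ 1.0: the density integrates to one
```

The last line illustrates the normalization requirement stated earlier: the integral of the density over the entire space is one.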
Probability density function
–
Boxplot and probability density function of a normal distribution N(0, σ²).
56.
Sample space
–
In probability theory, the sample space of an experiment or random trial is the set of all possible outcomes or results of that experiment. A sample space is usually denoted using set notation, and the possible outcomes are listed as elements in the set. It is common to refer to a sample space by the labels S or Ω. For example, if the experiment is tossing a coin, the sample space is typically the set {heads, tails}. For tossing two coins, the sample space would be {(heads, heads), (heads, tails), (tails, heads), (tails, tails)}. For tossing a single six-sided die, the sample space is {1, 2, 3, 4, 5, 6}. A well-defined sample space is one of three elements in a probabilistic model; the other two are a well-defined set of possible events and a probability assigned to each event. For many experiments, there may be more than one plausible sample space available. For example, when drawing a card from a standard deck of fifty-two playing cards, one possibility for the sample space could be the various ranks, while another could be the suits. Still other sample spaces are possible, such as when some cards have been flipped during shuffling. Some treatments of probability assume that the various outcomes of an experiment are always defined so as to be equally likely; the result of this is that every possible combination of individuals who could be chosen for the sample is also equally likely. In an elementary approach to probability, any subset of the sample space is usually called an event. However, this gives rise to problems when the sample space is infinite; under the measure-theoretic definition, only measurable subsets of the sample space, constituting a σ-algebra over the sample space itself, are considered events.

See also: Probability space, Space, Set, Event, σ-algebra
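The coin, two-coin, and die examples can be written out directly. The snippet below (illustrative encodings only) builds each sample space as a set and treats an event as a subset of the sample space:

```python
from itertools import product

# Sample spaces for the experiments above (illustrative encodings).
coin = {"heads", "tails"}                 # tossing one coin
two_coins = set(product(coin, repeat=2))  # ordered pairs of outcomes
die = {1, 2, 3, 4, 5, 6}                  # one six-sided die

# An event is a subset of the sample space, e.g. "the die shows an odd number".
odd = {x for x in die if x % 2 == 1}
assert odd <= die  # every event is contained in the sample space

# With equally likely outcomes, P(event) = |event| / |sample space|.
p_odd = len(odd) / len(die)
print(len(two_coins), p_odd)  # 4 outcomes for two coins; P(odd) = 0.5
```

For a finite sample space with equally likely outcomes, this counting rule is all that is needed; the σ-algebra machinery mentioned above only becomes necessary when the sample space is infinite.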
Sample space
–
Flipping a coin leads to a sample space composed of two outcomes that are almost equally likely.
Sample space
–
Up or down? Flipping a brass tack leads to a sample space composed of two outcomes that are not equally likely.
57.
Real number
–
In mathematics, a real number is a value that represents a quantity along a line. The adjective real in this context was introduced in the 17th century by René Descartes. The real numbers include all the rational numbers, such as the integer −5 and the fraction 4/3, and all the irrational numbers, such as √2. Included within the irrationals are the transcendental numbers, such as π. Real numbers can be thought of as points on an infinitely long line called the number line or real line. Any real number can be determined by a possibly infinite decimal representation, such as that of 8.632. The real line can be thought of as a part of the complex plane, and complex numbers include real numbers. These descriptions of the real numbers are not sufficiently rigorous by the modern standards of pure mathematics; several rigorous definitions exist, all of which satisfy the axiomatic definition and are thus equivalent. The statement that there is no subset of the reals with cardinality strictly greater than ℵ0 and strictly smaller than the cardinality of the reals is known as the continuum hypothesis. Simple fractions were used by the Egyptians around 1000 BC; the Vedic Sulba Sutras (c. 600 BC) include what may be the first use of irrational numbers. Around 500 BC, the Greek mathematicians led by Pythagoras realized the need for irrational numbers, in particular the irrationality of the square root of 2. Arabic mathematicians merged the concepts of number and magnitude into a more general idea of real numbers. In the 16th century, Simon Stevin created the basis for modern decimal notation; in the 17th century, Descartes introduced the term real to describe roots of a polynomial, distinguishing them from imaginary ones. In the 18th and 19th centuries, there was much work on irrational and transcendental numbers. Johann Heinrich Lambert gave the first flawed proof that π cannot be rational, and Adrien-Marie Legendre completed the proof. Évariste Galois developed techniques for determining whether a given equation could be solved by radicals, which gave rise to the field of Galois theory. 
Charles Hermite first proved that e is transcendental, and Ferdinand von Lindemann showed that π is transcendental. Lindemann's proof was much simplified by Weierstrass, still further by David Hilbert, and was finally made elementary by Adolf Hurwitz and Paul Gordan. The development of calculus in the 18th century used the entire set of real numbers without having defined them cleanly. The first rigorous definition was given by Georg Cantor in 1871; in 1874, he showed that the set of all real numbers is uncountably infinite but the set of all algebraic numbers is countably infinite. Contrary to widely held beliefs, his first method was not his famous diagonal argument, which he published in 1891. The real number system can be defined axiomatically up to an isomorphism, which is described hereafter. Another possibility is to start from some rigorous axiomatization of Euclidean geometry and then define the real number system geometrically. From the structuralist point of view all these constructions are on equal footing.
Real number
–
A symbol of the set of real numbers (ℝ)
58.
Journal of the American Statistical Association
–
The Journal of the American Statistical Association is the primary journal published by the American Statistical Association, the main professional body for statisticians in the United States. It is published four times a year, and it had an impact factor of 2.063 in 2010, tenth highest in the Statistics and Probability category of Journal Citation Reports. In a 2003 survey of statisticians, the Journal of the American Statistical Association was ranked first, among all journals, for Applications of Statistics. The predecessor of this journal started in 1888 with the name Publications of the American Statistical Association; it became Quarterly Publications of the American Statistical Association in 1912, and later the Journal of the American Statistical Association.
Journal of the American Statistical Association
–
Journal of the American Statistical Association
59.
ArXiv
–
In many fields of mathematics and physics, almost all scientific papers are self-archived on the arXiv repository. Begun on August 14, 1991, arXiv.org passed the half-million-article milestone on October 3, 2008; by 2014 the submission rate had grown to more than 8,000 per month. The arXiv was made possible by the low-bandwidth TeX file format. Around 1990, Joanne Cohn began emailing physics preprints to colleagues as TeX files, but the number of papers being sent soon filled mailboxes to capacity. Additional modes of access were added later: FTP in 1991 and Gopher in 1992. The term e-print was quickly adopted to describe the articles, and the repository's original domain name was xxx.lanl.gov. Due to LANL's lack of interest in the rapidly expanding technology, in 1999 Paul Ginsparg changed institutions to Cornell University, and the arXiv is now hosted principally by Cornell, with 8 mirrors around the world. Its existence was one of the factors that led to the current movement in scientific publishing known as open access. Mathematicians and scientists regularly upload their papers to arXiv.org for worldwide access. Ginsparg was awarded a MacArthur Fellowship in 2002 for his establishment of arXiv. The annual budget for arXiv is approximately $826,000 for 2013 to 2017, funded jointly by Cornell University Library and supporting institutions; annual donations were envisaged to vary in size between $2,300 and $4,000, based on each institution's usage. As of 14 January 2014, 174 institutions had pledged support for the period 2013–2017 on this basis. In September 2011, Cornell University Library took overall administrative and financial responsibility for arXiv's operation and development. Ginsparg was quoted in the Chronicle of Higher Education as saying it "was supposed to be a three-hour tour"; however, Ginsparg remains on the arXiv Scientific Advisory Board and on the arXiv Physics Advisory Committee. 
The lists of moderators for many sections of the arXiv are publicly available. Additionally, an endorsement system was introduced in 2004 as part of an effort to ensure content that is relevant and of interest to current research in the specified disciplines. Under the system, for categories that use it, an author must be endorsed by an established arXiv author before being allowed to submit papers to those categories. Endorsers are not asked to review the paper for errors. New authors from recognized academic institutions generally receive automatic endorsement, which in practice means that they do not need to deal with the endorsement system at all. However, the endorsement system has attracted criticism for allegedly restricting scientific inquiry. Grigori Perelman, who posted his proof of the Poincaré conjecture only on the arXiv, appears content to forgo the traditional peer-reviewed journal process, stating, "If anybody is interested in my way of solving the problem, it's all there – let them go and read about it." The arXiv generally re-classifies such works, e.g. into General mathematics, rather than deleting them. Papers can be submitted in any of several formats, including LaTeX, and PDF printed from a word processor other than TeX or LaTeX. The submission is rejected by the software if generating the final PDF file fails, or if any image file is too large. arXiv now allows one to store and modify an incomplete submission; the time stamp on the article is set when the submission is finalized.
ArXiv
–
arXiv
ArXiv
–
A screenshot of the arXiv taken in 1994, using the browser NCSA Mosaic. At the time, HTML forms were a new technology.
60.
Cambridge University Press
–
Cambridge University Press is the publishing business of the University of Cambridge. Granted letters patent by Henry VIII in 1534, it is the world's oldest publishing house, and it also holds letters patent as the Queen's Printer. The Press's mission is "to further the University's mission by disseminating knowledge in the pursuit of education, learning and research". Cambridge University Press is a department of the University of Cambridge and is both an academic and educational publisher, with a global presence, publishing hubs, and offices in more than 40 countries. Its publishing includes journals, monographs, reference works, and textbooks. Cambridge University Press is a charitable enterprise that transfers part of its annual surplus back to the university. It is both the oldest publishing house in the world and the oldest university press: it originated from letters patent granted to the University of Cambridge by Henry VIII in 1534, and has been producing books continuously since the first University Press book was printed. Cambridge is one of the two privileged presses. Authors published by Cambridge have included John Milton, William Harvey, Isaac Newton, Bertrand Russell, and Stephen Hawking. In 1591, Thomas's successor, John Legate, printed the first Cambridge Bible; the London Stationers objected strenuously, claiming that they had the monopoly on Bible printing. The university's response was to point out the provision in its charter to print "all manner of books". In July 1697 the Duke of Somerset made a loan of £200 to the university towards "the house and presse", and James Halman, Registrary of the University, contributed for the same purpose. It was in Bentley's time, in 1698, that a body of scholars was appointed to be responsible to the university for the Press's affairs. The Press Syndicate's publishing committee still meets regularly, and its role still includes reviewing the Press's publications. John Baskerville became University Printer in the mid-eighteenth century. 
Baskerville's concern was the production of the finest possible books using his own type-design. A technological breakthrough was badly needed, and it came when Lord Stanhope perfected the making of stereotype plates. This involved making a mould of the surface of a page of type and casting plates from it. The Press was the first to use this technique, and in 1805 used it to produce a technically successful edition. Under the stewardship of C. J. Clay, who was University Printer from 1854 to 1882, the Press increased the size and scale of its academic and educational publishing operation. An important factor in this increase was the inauguration of its list of schoolbooks. During Clay's administration, the Press also undertook a sizable co-publishing venture with Oxford: the Revised Version of the Bible, which was begun in 1870 and completed in 1885. It was Wright who devised the plan for one of the most distinctive Cambridge contributions to publishing, the Cambridge Histories; the Cambridge Modern History was published between 1902 and 1912.
Cambridge University Press
–
The University Printing House, on the main site of the Press
Cambridge University Press
–
The letters patent of Cambridge University Press by Henry VIII allow the Press to print "all manner of books". The fine initial with the king's portrait inside it and the large first line of script are still discernible.
Cambridge University Press
–
The Pitt Building in Cambridge, which used to be the headquarters of Cambridge University Press, and now serves as a conference centre for the Press.
61.
History of logic
–
The history of logic deals with the study of the development of the science of valid inference. Formal logics developed in ancient times in China, India, and Greece. Greek methods, particularly Aristotelian logic as found in the Organon, found wide application and acceptance in Western science and mathematics for millennia. The Stoics, especially Chrysippus, began the development of predicate logic. Christian and Islamic philosophers such as Boethius and William of Ockham further developed Aristotle's logic in the Middle Ages, reaching a high point in the mid-fourteenth century. The period between the fourteenth century and the beginning of the nineteenth century saw largely decline and neglect; empirical methods ruled the day, as evidenced by Sir Francis Bacon's Novum Organum of 1620. Valid reasoning has been employed in all periods of human history; logic, however, studies the principles of valid reasoning and inference. It is probable that the idea of demonstrating a conclusion first arose in connection with geometry. The ancient Egyptians discovered geometry, including the formula for the volume of a truncated pyramid, and ancient Babylon was also skilled in mathematics. While the ancient Egyptians empirically discovered some truths of geometry, the great achievement of the ancient Greeks was to replace empirical methods by demonstrative proof. Both Thales and Pythagoras of the Pre-Socratic philosophers seem aware of geometry's methods. Fragments of early proofs are preserved in the works of Plato and Aristotle, and the idea of a deductive system was probably known in the Pythagorean school and the Platonic Academy. The proofs of Euclid of Alexandria are a paradigm of Greek geometry. The three basic principles of geometry are as follows: certain propositions must be accepted as true without demonstration; such a proposition is known as an axiom of geometry. 
Every proposition that is not an axiom of geometry must be demonstrated as following from the axioms of geometry, and the proof must be formal; that is, the derivation of the proposition must be independent of the particular subject matter in question. Further evidence that early Greek thinkers were concerned with the principles of reasoning is found in the fragment called dissoi logoi, part of a protracted debate about truth and falsity. Thales was said to have had a sacrifice in celebration of discovering Thales' theorem, just as Pythagoras had for the Pythagorean theorem. Indian and Babylonian mathematicians knew his theorem for special cases before he proved it. It is believed that Thales learned that an angle inscribed in a semicircle is a right angle during his travels to Babylon. Before 520 BC, on one of his visits to Egypt or Greece, Pythagoras might have met the c. 54 years older Thales. The systematic study of proof seems to have begun with the school of Pythagoras in the sixth century BC. Indeed, the Pythagoreans, believing all was number, are the first philosophers to emphasize form rather than matter. Heraclitus is known for his obscure sayings: his logos holds always, but humans always prove unable to understand it, both before hearing it and when they have first heard it, and other people fail to notice what they do when awake. In contrast to Heraclitus, Parmenides held that all is one and nothing changes.
History of logic
–
Plato's academy
History of logic
–
Aristotle's logic was still influential in the Renaissance
History of logic
–
Chrysippus of Soli
History of logic
–
A text by Avicenna, founder of Avicennian logic
62.
Metamathematics
–
Metamathematics is the study of mathematics itself using mathematical methods. This study produces metatheories, which are mathematical theories about other mathematical theories. Emphasis on metamathematics owes itself to David Hilbert's attempt to secure the foundations of mathematics in the early part of the 20th century. Metamathematics provides a mathematical technique for investigating a great variety of foundation problems for mathematics. An important feature of metamathematics is its emphasis on differentiating between reasoning from inside a system and from outside a system. An informal illustration of this is categorizing the proposition "2+2=4" as belonging to mathematics while categorizing the proposition "'2+2=4' is valid" as belonging to metamathematics. Something similar can be said about the well-known Russell's paradox. Metamathematics was intimately connected to mathematical logic, so that the early histories of the two fields, during the late 19th and early 20th centuries, largely overlap. More recently, mathematical logic has often included the study of new pure mathematics, such as set theory, recursion theory and pure model theory. Serious metamathematical reflection began with the work of Gottlob Frege, especially his Begriffsschrift. David Hilbert was the first to invoke the term metamathematics with regularity; in his hands, it meant something akin to contemporary proof theory, in which finitary methods are used to study various axiomatized mathematical theorems. Today, metalogic and metamathematics are largely synonymous with each other. The discovery of hyperbolic geometry had important philosophical consequences for metamathematics: before its discovery there was just one geometry and mathematics, and the idea that another geometry existed was considered improbable. The uproar of the Boeotians came and went, and gave an impetus to metamathematics and great improvements in mathematical rigour, analytical philosophy and logic. 
Begriffsschrift is a book on logic by Gottlob Frege, published in 1879. Begriffsschrift is usually translated as concept writing or concept notation; the full title of the book identifies it as "a formula language, modeled on that of arithmetic, of pure thought". Frege's motivation for developing his formal approach to logic resembled Leibniz's motivation for his calculus ratiocinator. Frege went on to employ his logical calculus in his research on the foundations of mathematics, carried out over the next quarter century. As such, this project is of great importance in the history of mathematics and philosophy. One of the inspirations and motivations for Principia Mathematica (PM) was the earlier work of Gottlob Frege on logic, which had been shown to allow paradoxes such as Russell's. PM sought to avoid this problem by ruling out the unrestricted creation of arbitrary sets. This was achieved by replacing the notion of a general set with the notion of a hierarchy of sets of different types. Contemporary mathematics, however, avoids paradoxes such as Russell's in less unwieldy ways. Gödel's completeness theorem is a fundamental theorem in mathematical logic that establishes a correspondence between semantic truth and syntactic provability in first-order logic. It makes a link between model theory, which deals with what is true in different models, and proof theory, which studies what can be formally proven in particular formal systems. More formally, the theorem says that if a formula is logically valid then there is a finite deduction of the formula.
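The completeness theorem stated in the last sentence can be written compactly in the usual notation, where ⊨ marks semantic validity and ⊢ marks syntactic derivability:

```latex
\underbrace{\models \varphi}_{\text{valid: true in every model}}
\quad\Longrightarrow\quad
\underbrace{\vdash \varphi}_{\text{provable: has a finite deduction}}
```

The converse direction (provability implies validity) is the soundness theorem; together they give the correspondence between model theory and proof theory described above.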
Metamathematics
–
The title page of the shortened version of the Principia Mathematica to *56, an important work of metamathematics.
63.
Set theory
–
Set theory is a branch of mathematical logic that studies sets, which informally are collections of objects. Although any type of object can be collected into a set, set theory is applied most often to objects that are relevant to mathematics; the language of set theory can be used in the definitions of nearly all mathematical objects. The modern study of set theory was initiated by Georg Cantor. Set theory is commonly employed as a foundational system for mathematics, particularly in the form of Zermelo–Fraenkel set theory with the axiom of choice. Beyond its foundational role, set theory is a branch of mathematics in its own right. Contemporary research into set theory includes a diverse collection of topics, ranging from the structure of the real number line to the study of the consistency of large cardinals. Mathematical topics typically emerge and evolve through interactions among many researchers; set theory, however, was founded by a single paper in 1874 by Georg Cantor, "On a Property of the Collection of All Real Algebraic Numbers". Since the 5th century BC, beginning with Greek mathematician Zeno of Elea in the West and early Indian mathematicians in the East, mathematicians had struggled with the concept of infinity; especially notable is the work of Bernard Bolzano in the first half of the 19th century. Modern understanding of infinity began in 1867–71, with Cantor's work on number theory. An 1872 meeting between Cantor and Richard Dedekind influenced Cantor's thinking and culminated in Cantor's 1874 paper. Cantor's work initially polarized the mathematicians of his day: while Karl Weierstrass and Dedekind supported Cantor, Leopold Kronecker, now seen as a founder of mathematical constructivism, did not. This utility of set theory led to the article Mengenlehre, contributed in 1898 by Arthur Schoenflies to Klein's encyclopedia. In 1899 Cantor had himself posed the question "What is the cardinal number of the set of all sets?" 
Russell used his paradox as a theme in his 1903 review of continental mathematics in his The Principles of Mathematics. In 1906 English readers gained the book Theory of Sets of Points by William Henry Young and his wife Grace Chisholm Young, published by Cambridge University Press. The momentum of set theory was such that debate on the paradoxes did not lead to its abandonment. The work of Zermelo in 1908 and Abraham Fraenkel in 1922 resulted in the set of axioms ZFC, which became the most commonly used set of axioms for set theory, and the work of analysts such as Henri Lebesgue demonstrated the great mathematical utility of set theory. Set theory is used as a foundational system, although in some areas category theory is thought to be a preferred foundation. Set theory begins with a binary relation between an object o and a set A: if o is a member of A, the notation o ∈ A is used. Since sets are objects, the membership relation can relate sets as well. A derived binary relation between two sets is the subset relation, also called set inclusion: if all the members of set A are also members of set B, then A is a subset of B. For example, {1, 2} is a subset of {1, 2, 3}, and so is {2}, but {1, 4} is not. As implied by this definition, a set is a subset of itself; for cases where this possibility is unsuitable or would make sense to be rejected, the term proper subset is defined.
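Python's built-in sets mirror these relations directly, which makes the membership and inclusion relations easy to check. The sketch below uses arbitrary example sets:

```python
# Membership (∈) and inclusion (⊆) expressed with Python's built-in sets,
# using arbitrary example sets.
A = {1, 2}
B = {1, 2, 3}

print(1 in A)   # membership: 1 ∈ A, True
print(A <= B)   # inclusion:  A ⊆ B, True
print(A < B)    # proper subset: A ⊂ B, True
print(B <= B)   # a set is a subset of itself, True
print(B < B)    # ...but never a proper subset of itself, False
```

The last two lines show the distinction the text draws: `<=` is set inclusion, while `<` is the proper-subset relation, which excludes equality.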
Set theory
–
Georg Cantor
Set theory
–
A Venn diagram illustrating the intersection of two sets.
64.
A priori and a posteriori
–
The Latin phrases a priori and a posteriori are philosophical terms of art popularized by Immanuel Kant's Critique of Pure Reason, one of the most influential works in the history of philosophy. These terms are used with respect to reasoning to distinguish necessary conclusions from first premises from conclusions based on sense observation. A posteriori knowledge or justification is dependent on experience or empirical evidence, as with most aspects of science and personal knowledge. There are many points of view on these two types of knowledge, and their relationship gives rise to one of the oldest problems in modern philosophy. The terms a priori and a posteriori are primarily used as adjectives to modify the noun knowledge; however, a priori is sometimes used to modify other nouns, such as truth. Philosophers also may use apriority and aprioricity as nouns to refer to the quality of being a priori. Although definitions and use of the terms have varied in the history of philosophy, they have consistently labeled two separate epistemological notions. See also the related distinctions: deductive/inductive, analytic/synthetic, necessary/contingent. The intuitive distinction between a priori and a posteriori knowledge is best seen in examples. A priori: consider the proposition "If George V reigned at least four days, then he reigned more than three days." This is something that one knows a priori, because it expresses a statement that one can derive by reason alone. A posteriori: compare this with the proposition expressed by the sentence "George V reigned from 1910 to 1936." This is something that one must come to know a posteriori, because it expresses an empirical fact unknowable by reason alone. Several philosophers reacting to Kant sought to explain a priori knowledge without appealing to, as Paul Boghossian explains, "a special faculty" that has never been described in satisfactory terms. One theory, popular among the positivists of the early 20th century, is what Boghossian calls the analytic explanation of the a priori. 
The distinction between analytic and synthetic propositions was first introduced by Kant. In short, proponents of this explanation claimed to have reduced a dubious metaphysical faculty of pure reason to a legitimate linguistic notion of analyticity. However, the analytic explanation of a priori knowledge has undergone several criticisms. Most notably, Quine argued that the analytic–synthetic distinction is illegitimate. Quine states: "But for all its a priori reasonableness, a boundary between analytic and synthetic statements simply has not been drawn. That there is such a distinction to be drawn at all is an unempirical dogma of empiricists, a metaphysical article of faith." While the soundness of Quine's critique is highly disputed, it had an effect on the project of explaining the a priori in terms of the analytic. The metaphysical distinction between necessary and contingent truths has also been related to a priori and a posteriori knowledge. A proposition that is necessarily true is one whose negation is self-contradictory. Consider the proposition that all bachelors are unmarried: its negation, the proposition that some bachelors are married, is incoherent, because the concept of being unmarried is part of the concept of being a bachelor.
65.
Definition
–
A definition is a statement of the meaning of a term. Definitions can be classified into two large categories: intensional definitions and extensional definitions. Another important category of definitions is the class of ostensive definitions, which convey the meaning of a term by pointing out examples. A term may have many different senses and multiple meanings, and thus require multiple definitions. In mathematics, a definition is used to give a precise meaning to a new term. Definitions and axioms are the basis on which all of mathematics is constructed. In modern usage, a definition is something, typically expressed in words, that attaches a meaning to a word or group of words. The word or group of words that is to be defined is called the definiendum, and the word, group of words, or action that defines it is called the definiens. In the definition "An elephant is a large gray animal native to Asia and Africa", the word elephant is the definiendum, and everything after the word is is the definiens. Note that the definiens is not the meaning of the word defined, but is instead something that conveys the same meaning as that word. There are many sub-types of definitions, often specific to a given field of knowledge or study. An intensional definition, also called a connotative definition, specifies the necessary and sufficient conditions for a thing to be a member of a specific set; any definition that attempts to set out the essence of something, such as that by genus and differentia, is an intensional definition. An extensional definition, also called a denotative definition, of a concept or term specifies its extension: it is a list naming every object that is a member of a specific set. An extensional definition of the seven deadly sins would be the list of wrath, greed, sloth, pride, lust, envy, and gluttony. A genus–differentia definition is a type of intensional definition that takes a large category (the genus) and narrows it down to a smaller category by a distinguishing characteristic (the differentia): the differentia is the portion of the new definition that is not provided by the genus. For example, consider the following genus–differentia definitions: a triangle: a plane figure that has three straight bounding sides. 
A quadrilateral is a plane figure that has four straight bounding sides. Those definitions can be expressed as one genus ("a plane figure") and two differentiae ("that has three straight bounding sides" and "that has four straight bounding sides"). It is possible to have two different genus-differentia definitions that describe the same term, especially when the term describes the overlap of two large categories. For instance, both of these genus-differentia definitions of square are equally acceptable: a square is a rectangle that is a rhombus; a square is a rhombus that is a rectangle. Thus, a square is a member of both the genus rectangle and the genus rhombus. One important form of definition is the ostensive definition. This gives the meaning of a term by pointing, in the case of an individual, to the thing itself, or in the case of a class, to examples of the right kind. So one can explain who Alice is by pointing her out to another, or what a rabbit is by pointing at several. The process of ostensive definition itself was critically appraised by Ludwig Wittgenstein
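The contrast between the two kinds of definition above can be made concrete in code. The following is an illustrative sketch only: the class and predicate names are ours, not from the article. An extensional definition names every member of the set outright; an intensional genus-differentia definition gives a genus plus a distinguishing test, and the square example shows two such definitions picking out the same objects.

```python
from dataclasses import dataclass
from typing import List

# Extensional definition: every member of the set is listed by name.
SEVEN_DEADLY_SINS = {"wrath", "greed", "sloth", "pride", "lust", "envy", "gluttony"}

@dataclass
class Quadrilateral:
    angles: List[float]        # interior angles, in degrees
    edge_lengths: List[float]  # side lengths

def is_rectangle(q: Quadrilateral) -> bool:
    # Genus: quadrilateral; differentia: all interior angles are right angles.
    return all(a == 90 for a in q.angles)

def is_rhombus(q: Quadrilateral) -> bool:
    # Genus: quadrilateral; differentia: all four sides have equal length.
    return len(set(q.edge_lengths)) == 1

# Two equally acceptable genus-differentia definitions of "square":
def is_square_as_rectangle(q: Quadrilateral) -> bool:
    return is_rectangle(q) and is_rhombus(q)   # a rectangle that is a rhombus

def is_square_as_rhombus(q: Quadrilateral) -> bool:
    return is_rhombus(q) and is_rectangle(q)   # a rhombus that is a rectangle

unit_square = Quadrilateral(angles=[90, 90, 90, 90], edge_lengths=[1, 1, 1, 1])
print(is_square_as_rectangle(unit_square), is_square_as_rhombus(unit_square))
```

Both predicates agree on every input, mirroring the point that a single term can have multiple correct genus-differentia definitions when it sits in the overlap of two genera.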
Definition
–
A definition states the meaning of a word using other words. This is sometimes challenging. Common dictionaries contain lexical, descriptive definitions, but there are various types of definition - all with different purposes and focuses.
66.
Logical truth
–
Logical truth is one of the most fundamental concepts in logic, and there are different theories on its nature. A logical truth is a statement which is true and remains true under all reinterpretations of its components other than its logical constants; it is a type of analytic statement. All of philosophical logic can be thought of as providing accounts of the nature of logical truth. Logical truths are truths which are considered to be necessarily true. This is to say that they are considered to be such that they could not be untrue, and that they must hold regardless of intuitions, practices, and bodies of belief. However, it is not universally agreed that there are any statements which are necessarily true. A logical truth is considered by some philosophers to be a statement which is true in all possible worlds. This is contrasted with facts, which are true in this world as it has historically unfolded. Later, with the rise of formal logic, a logical truth came to be considered a statement which is true under all possible interpretations. Empiricists commonly respond to this objection by arguing that logical truths are analytic: being analytic statements, they do not contain any information about any matters of fact. Other than logical truths, there is also a second class of analytic statements. The characteristic of such a statement is that it can be turned into a logical truth by substituting synonyms for synonyms salva veritate: "No bachelor is married" can be turned into "No unmarried man is married" by substituting "unmarried man" for its synonym "bachelor". In his essay "Two Dogmas of Empiricism", the philosopher W. V. O. Quine called into question the distinction between analytic and synthetic statements. In his conclusion, Quine rejects the view that logical truths are necessary truths. Instead he posits that the truth-value of any statement can be changed, including that of logical truths. Considering different interpretations of the same statement leads to the notion of truth value. 
The simplest approach to truth values means that a statement may be true in one case and false in another. In one sense of the term tautology, it is any type of formula or proposition which turns out to be true under any possible interpretation of its terms. This is synonymous with logical truth. However, the term tautology is also commonly used to refer to what could more specifically be called truth-functional tautologies; not all logical truths are tautologies of such a kind. Logical constants, including logical connectives and quantifiers, can all be reduced conceptually to logical truth. For instance, two or more statements are logically incompatible if, and only if, their conjunction is logically false; one statement logically implies another when it is logically incompatible with the negation of the other. A statement is logically true if, and only if, its opposite is logically false; the opposite statements must contradict one another. In this way all logical connectives can be expressed in terms of preserving logical truth
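The notion of a truth-functional tautology, a formula true under any possible interpretation of its terms, can be checked mechanically. The brute-force checker below is our own sketch (not from the article): it enumerates all 2^n assignments of truth values to the atomic terms and tests whether the formula holds under every one.

```python
from itertools import product

def is_tautology(formula, num_vars: int) -> bool:
    """True if formula holds under every assignment of truth values."""
    return all(formula(*vals) for vals in product([True, False], repeat=num_vars))

print(is_tautology(lambda p: p or not p, 1))             # law of excluded middle: True
print(is_tautology(lambda p, q: not (p and q) or p, 2))  # True under all four assignments
print(is_tautology(lambda p, q: p or q, 2))              # False: fails when both are False
```

Note the caveat from the text: this only captures truth-functional tautologies. A first-order logical truth such as the drinker paradox quantifies over a domain and cannot be verified by enumerating truth-value assignments alone.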
67.
Reason
–
Reason, or an aspect of it, is sometimes referred to as rationality. Reasoning is associated with thinking, cognition, and intellect. Along these lines, a distinction is often drawn between discursive reason, reason proper, and intuitive reason, in which the reasoning process, however valid, tends toward the personal and the opaque. Reason, like habit or intuition, is one of the ways by which thinking moves from one idea to a related idea. For example, it is the means by which rational beings understand themselves to think about cause and effect, truth and falsehood, and what is good or bad. It is also identified with the ability to self-consciously change beliefs, attitudes, traditions, and institutions. In contrast to reason as a noun, a reason is a consideration which explains or justifies some event, phenomenon, or behavior. The field of logic studies ways in which human beings reason formally through argument; the field of automated reasoning studies how reasoning may or may not be modeled computationally. Animal psychology considers the question of whether animals other than humans can reason. The original Greek term was λόγος (logos), the root of the modern English word logic but also a word which could mean, for example, speech, explanation, or an account. As a philosophical term, logos was translated in its non-linguistic senses in Latin as ratio. This was originally not just a translation used for philosophy, but was also commonly a translation for logos in the sense of an account of money. French raison is derived directly from Latin, and this is the source of the English word reason. Some philosophers, Thomas Hobbes for example, also used the word ratiocination as a synonym for reasoning. Philosophy can be described as a way of life based upon reason, and in the other direction reason has been one of the major subjects of philosophical discussion since ancient times. 
Reason is often said to be reflexive, or self-correcting, and it has been defined in different ways, at different times, by different thinkers about human nature. Perhaps starting with Pythagoras or Heraclitus, the cosmos is even said to have reason. Reason, by this account, is not just one characteristic that humans happen to have, and that influences happiness amongst other characteristics. Within the human mind or soul, reason was described by Plato as being the monarch which should rule over the other parts, such as spiritedness and the passions. Aristotle, Plato's student, defined human beings as rational animals, and he defined the highest human happiness or well-being as a life which is lived consistently, excellently, and completely in accordance with reason. The conclusions to be drawn from the discussions of Aristotle and Plato on this matter are amongst the most debated in the history of philosophy. For example, in the neo-Platonist account of Plotinus, the cosmos has one soul, which is the seat of all reason. Reason is for Plotinus both the provider of form to material things and the light which brings individual souls back into line with their source. The early modern era was marked by a number of significant changes in the understanding of reason; one of the most important of these changes involved a change in the metaphysical understanding of human beings
Reason
–
Francisco de Goya, The Sleep of Reason Produces Monsters (El sueño de la razón produce monstruos), c. 1797
Reason
–
René Descartes
Reason
–
Dan Sperber believes that reasoning in groups is more effective and promotes their evolutionary fitness.
68.
List of paradoxes
–
This is a list of paradoxes, grouped thematically. The grouping is approximate, as paradoxes may fit more than one category. Because of varying definitions of the word paradox, some of the following are not considered to be paradoxes by everyone. This list collects only scenarios that have been called a paradox by at least one source and have their own article. Although considered paradoxes, some of these are based on fallacious reasoning; informally, the term is often used to describe a counter-intuitive result. Barbershop paradox: the supposition that, if one of two simultaneous assumptions leads to a contradiction, the other assumption is also disproved leads to paradoxical consequences. Not to be confused with the Barber paradox. What the Tortoise Said to Achilles: "Whatever Logic is good enough to tell me is worth writing down." Also known as Carroll's paradox, not to be confused with the physical paradox of the same name. Catch-22: a situation in which someone is in need of something that can only be had by not being in need of it. A soldier who wants to be declared insane in order to avoid combat is deemed not insane for that very reason. Drinker paradox: in any pub there is a customer of whom it is true to say: if that customer drinks, everybody in the pub drinks. Paradox of entailment: inconsistent premises always make an argument valid. Raven paradox: observing a green apple increases the likelihood of all ravens being black. Ross's paradox: disjunction introduction poses a problem for imperative inference by seemingly permitting arbitrary imperatives to be inferred. Unexpected hanging paradox: the day of the hanging will be a surprise, so it cannot happen at all; the surprise examination and Bottle Imp paradox use similar logic. Barber paradox: a barber shaves all and only those men who do not shave themselves. Does he shave himself? Bhartrhari's paradox: the thesis that there are things which are unnameable conflicts with the notion that something is named by calling it unnameable. 
Berry paradox: the phrase "the first number not nameable in under ten words" appears to name it in nine words. Paradox of the Court: a law student agrees to pay his teacher after winning his first case; the teacher then sues the student for payment. Curry's paradox: "If this sentence is true, then Santa Claus exists." Epimenides paradox: a Cretan says, "All Cretans are liars." This paradox works in mainly the same way as the Liar paradox
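The drinker paradox listed above can be stated precisely. In classical first-order logic, for any nonempty domain (the pub) and any predicate D(x) read as "x drinks", the following schema is a theorem:

```latex
\exists x \, \bigl( D(x) \rightarrow \forall y \, D(y) \bigr)
```

Informally: either everybody in the pub drinks, in which case any customer witnesses the formula, or somebody does not drink, in which case that non-drinker witnesses it vacuously, since the antecedent of the conditional is false for them.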
List of paradoxes
–
Abilene
List of paradoxes
–
The Monty Hall problem: which door do you choose?
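The Monty Hall problem in the caption above is a classic veridical paradox: switching doors wins roughly two thirds of the time, against the intuitive answer of one half. A minimal Monte Carlo sketch (our own illustration, not from the article) makes the gap visible; it assumes the standard rules that the host always opens a non-chosen door hiding no car.

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one round; return True if the final pick wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that is neither the contestant's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
stay = sum(monty_hall_trial(False) for _ in range(trials)) / trials
swap = sum(monty_hall_trial(True) for _ in range(trials)) / trials
print(f"stay win rate ≈ {stay:.3f}, switch win rate ≈ {swap:.3f}")
```

Over many trials the stay strategy converges toward 1/3 and the switch strategy toward 2/3, matching the standard analysis of the problem.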
69.
Integrated Authority File
–
The Integrated Authority File (German: Gemeinsame Normdatei, GND) is an international authority file for the organisation of personal names, subject headings, and corporate bodies from catalogues. It is used mainly for documentation in libraries and increasingly also by archives. The GND is managed by the German National Library in cooperation with various regional library networks in German-speaking Europe and other partners, and falls under the Creative Commons Zero (CC0) licence. The GND specification provides a hierarchy of high-level entities and sub-classes, useful in library classification, and an approach to unambiguous identification of single elements. It also comprises an ontology intended for knowledge representation in the semantic web, available in RDF format
Integrated Authority File
–
GND screenshot