1.
Probability theory
–
Probability theory is the branch of mathematics concerned with probability, the analysis of random phenomena. Although it is not possible to predict precisely the results of random events, much can be said about their patterns; two representative mathematical results describing such patterns are the law of large numbers and the central limit theorem. As a mathematical foundation for statistics, probability theory is essential to human activities that involve quantitative analysis of large sets of data. Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state. Indeed, a great discovery of twentieth-century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics. Christiaan Huygens published a book on the subject in 1657. Initially, probability theory mainly considered discrete events, and its methods were mainly combinatorial. Eventually, analytical considerations compelled the incorporation of continuous variables into the theory, and this culminated in modern probability theory, on foundations laid by Andrey Nikolaevich Kolmogorov. Kolmogorov combined the notion of sample space, introduced by Richard von Mises, with measure theory; this became the mostly undisputed axiomatic basis for modern probability theory. Most introductions to probability theory treat discrete probability distributions and continuous probability distributions separately; the more mathematically advanced, measure theory-based treatment of probability covers the discrete and continuous cases uniformly. Consider an experiment that can produce a number of outcomes. The set of all outcomes is called the sample space of the experiment. The power set of the sample space is formed by considering all different collections of possible results. For example, rolling a fair die produces one of six possible results; one collection of possible results corresponds to getting an odd number. Thus, the subset {1, 3, 5} is an element of the power set of the sample space of die rolls.
In this case, {1, 3, 5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, that event is said to have occurred. Probability is a way of assigning every event a value between zero and one, with the requirement that the event made up of all possible results be assigned a value of one. The probability that any one of several mutually exclusive events will occur is the sum of their individual probabilities. For example, the probability of the event {1, 2, 3, 4, 6} is 5/6; this event encompasses the possibility of any number except five being rolled. The mutually exclusive event {5} has a probability of 1/6, and the event {1, 2, 3, 4, 5, 6} has a probability of 1. Discrete probability theory deals with events that occur in countable sample spaces. Modern definition: the modern definition starts with a finite or countable set called the sample space, which relates to the set of all possible outcomes in the classical sense, denoted by Ω
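The fair-die example above can be checked with a short sketch; the event names and the uniform-measure helper here are illustrative choices, not from the text:

```python
from fractions import Fraction

# Sketch of the fair-die example: under the uniform measure, the
# probability of an event is |event| / |sample space|.
omega = frozenset({1, 2, 3, 4, 5, 6})   # the sample space

def prob(event):
    """Probability of an event (a subset of the sample space)."""
    assert frozenset(event) <= omega, "an event must be a subset of the sample space"
    return Fraction(len(frozenset(event)), len(omega))

odd = {1, 3, 5}                 # the event "the die falls on an odd number"
not_five = omega - {5}          # any number except five

print(prob(odd))        # 1/2
print(prob(not_five))   # 5/6
print(prob({5}))        # 1/6
print(prob(omega))      # 1
```

Note that the mutually exclusive events {5} and {1, 2, 3, 4, 6} have probabilities summing to 1, as the text describes.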
Probability theory
–
The normal distribution, a continuous probability distribution.
Probability theory
–
The Poisson distribution, a discrete probability distribution.
2.
Glossary of probability and statistics
–
The following is a glossary of terms used in the mathematical sciences of statistics and probability.
alternative hypothesis
atomic event: Another name for elementary event.
bar chart
bias: 1. A sample that is not representative of the population. 2. …
causal study: For example, how will my headache feel if I take aspirin? Causal studies may be either experimental or observational.
conditional probability: The probability of an event A given that another event B has occurred; written P(A | B), and read "the probability of A, given B".
confidence interval (CI): In inferential statistics, a range of plausible values for the population mean; for example, one based on a study of sleep habits among 100 people. This is different from the sample mean, which can be measured directly.
confidence level: Also known as a confidence coefficient, the confidence level indicates the probability that the confidence interval captures the true population mean. For example, an interval with a 95 percent confidence level has a 95 percent chance of capturing the population mean. Technically, this means that, if the experiment were repeated many times, 95 percent of the CIs would contain the true population mean.
continuous variable
correlation: Also called the correlation coefficient, a measure of the strength of the linear relationship between two random variables. An example is the Pearson product-moment correlation coefficient, which is found by dividing the covariance of the two variables by the product of their standard deviations.
expected value: The sum of the probability of each possible outcome of the experiment multiplied by its payoff; thus, it represents the amount one expects to win per bet if bets with identical odds are repeated many times. For example, the expected value of a six-sided die roll is 3.5. The concept is similar to the mean.
joint probability: The joint probability of A and B is written P(A ∩ B) or P(A, B).
kurtosis: A measure of the peakedness of the probability distribution of a real-valued random variable. For example, imagine pulling a ball with the number k from a bag of n balls.
marginal probability: The marginal probability of A is written P(A); contrast with conditional probability.
mean: 1. The expected value of a random variable. 2. … (think of the result of a series of coin-flips).
null hypothesis: For example, if one wanted to test whether light has an effect on sleep, the null hypothesis would be that it has no effect. It is often symbolized as H0.
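Two of the computable entries above, the expected value and the Pearson coefficient, can be sketched in Python; the sample data here are made up purely for illustration:

```python
# Expected value of a fair six-sided die: each face has probability 1/6,
# so E[X] = (1 + 2 + 3 + 4 + 5 + 6) / 6.
die_ev = sum(range(1, 7)) / 6
print(die_ev)  # 3.5

# Pearson product-moment correlation: covariance divided by the
# product of the standard deviations (sample versions shown).
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    sx = (sum((x - mx) ** 2 for x in xs) / (n - 1)) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / (n - 1)) ** 0.5
    return cov / (sx * sy)

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]        # perfectly linear in xs
print(pearson(xs, ys))       # close to 1.0
```

A perfectly linear relationship gives a coefficient of 1 (or -1 if decreasing), matching the glossary's description of correlation as a measure of linear strength.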
Glossary of probability and statistics
–
Statistics
3.
Notation in probability and statistics
–
Probability theory and statistics have some commonly used conventions, in addition to standard mathematical notation and mathematical symbols. Random variables are written in upper-case Roman letters: X, Y, etc. Particular realizations of a random variable are written in the corresponding lower-case letters; for example, x1, x2, …, xn could be a sample corresponding to the random variable X. P(A ∩ B) or P(A, B) indicates the probability that events A and B both occur; P(A ∪ B) indicates the probability of either event A or event B occurring. σ-algebras are usually written with upper-case calligraphic letters. Probability density functions and probability mass functions are denoted by lower-case letters, e.g. f. Cumulative distribution functions are denoted by upper-case letters, e.g. F. Greek letters are commonly used to denote unknown parameters. A tilde (~) denotes "has the probability distribution of". Placing a hat, or caret, over a true parameter denotes an estimator of it; e.g. θ̂ is an estimator for θ. The arithmetic mean of a series of values x1, x2, …, xn is often denoted by placing an overbar over the symbol, e.g. x̄, pronounced "x bar". The α-level upper critical value of a probability distribution is the value exceeded with probability α. Column vectors are usually denoted by boldface lower-case letters, e.g. x. The transpose operator is denoted by either a superscript T or a prime symbol; a row vector is written as the transpose of a column vector, e.g. xT or x′. Common abbreviations include: a.e. (almost everywhere), a.s. (almost surely), cdf (cumulative distribution function), cmf (cumulative mass function), df (degrees of freedom), and i.i.d. (independent and identically distributed).
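The hat and bar conventions can be illustrated with a small simulation; the parameter values, sample size, and seed here are arbitrary choices for the demo, not from the text:

```python
import random

# x1, ..., xn are realizations of a random variable X with mean mu;
# the sample mean "x-bar" serves as the estimator "mu-hat" of the
# unknown parameter mu (all values chosen arbitrarily).
random.seed(0)
mu, sigma, n = 10.0, 2.0, 10_000
sample = [random.gauss(mu, sigma) for _ in range(n)]   # x1, ..., xn

x_bar = sum(sample) / n        # the estimator of mu
print(abs(x_bar - mu) < 0.2)   # True: x-bar lands close to mu
```

With n = 10,000 draws the standard error of x̄ is σ/√n = 0.02, so the estimate falls well within the 0.2 tolerance used above.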
Notation in probability and statistics
–
Statistics
4.
Doubt
–
Doubt characterises a status in which the mind remains suspended between two contradictory propositions and is unable to assent to either of them. On one level, doubt is indecision between belief and disbelief. Doubt involves uncertainty, distrust or lack of sureness of a fact or an action. Doubt questions a notion of reality, and may involve delaying or rejecting relevant action out of concern for mistakes, faults or appropriateness. Doubt sometimes tends to call on reason; it may encourage people to hesitate before acting, and/or to apply more rigorous methods. Doubt may have particular importance as leading towards disbelief or non-acceptance. Societally, doubt creates an atmosphere of distrust, being accusatory in nature and de facto alleging either foolishness or deceit on the part of another. Such a stance has been fostered in Western European society since the Enlightenment, in opposition to tradition. Psychoanalytic theory attributes doubt to childhood, when the ego develops. Childhood experiences, these theories maintain, can plant doubt about one's abilities. Cognitive mental as well as more spiritual approaches abound in response to the wide variety of potential causes for doubt. Behavioral therapy — in which a person systematically asks his own mind if the doubt has any real basis — uses rational methods. This method contrasts with those of, say, the Buddhist faith: Buddhism sees doubt as a negative attachment to one's perceived past and future, and letting go of the history of one's life plays a central role in releasing the doubts developed in it. Partial or intermittent negative reinforcement can create a climate of fear and doubt. Descartes employed Cartesian doubt as a pre-eminent methodological tool in his fundamental philosophical investigations. Branches of philosophy like logic devote much effort to distinguishing the dubious, the probable and the certain. Much of illogic rests on dubious assumptions, dubious data or dubious conclusions, with rhetoric and whitewashing playing their parts.
Doubt that God exists may form the basis of agnosticism, the belief that one cannot determine the existence or non-existence of God. It may also form other brands of skepticism, such as Pyrrhonism, which do not take a stance on the existence of God. Alternatively, doubt over the existence of God may lead to acceptance of a particular religion. Doubt of a specific theology, scriptural or deistic, may bring into question the truth of that theology's set of beliefs. On the other hand, doubt as to some doctrines but acceptance of others may lead to the growth of heresy and/or the splitting off of sects or groups of thought. Thus proto-Protestants doubted papal authority, and substituted alternative methods of governance in their new churches
Doubt
–
The Incredulity of Saint Thomas by Caravaggio.
Doubt
–
Doubt
Doubt
–
Doubts, by Henrietta Rae, 1886
5.
Determinism
–
Determinism is the philosophical position that for every event there exist conditions that could cause no other event. There are many determinisms, depending on what pre-conditions are considered to be determinative of an event or action. Deterministic theories throughout the history of philosophy have sprung from diverse and sometimes overlapping motives and considerations. Some forms of determinism can be tested with ideas from physics. The opposite of determinism is some kind of indeterminism, and determinism is often contrasted with free will. Determinism is often taken to mean causal determinism, which in physics is known as cause-and-effect: the concept that events within a given paradigm are bound by causality in such a way that any state is completely determined by prior states. This meaning can be distinguished from the other varieties of determinism mentioned below. Numerous historical debates involve many philosophical positions and varieties of determinism; they include debates concerning determinism and free will, technically denoted as compatibilism and incompatibilism. Determinism should not be confused with the self-determination of human actions by reasons and motives. Determinism rarely requires that perfect prediction be practically possible; rather, causal determinism is a broad enough term to hold that one's deliberations, choices and actions are links in the causal chain. Causal determinism proposes that there is a chain of prior occurrences stretching back to the origin of the universe. The relation between events may not be specified, nor the origin of that universe. Causal determinists believe that there is nothing in the universe that is uncaused or self-caused. Historical determinism can also be synonymous with causal determinism. Causal determinism has also been considered more generally as the idea that everything that happens or exists is caused by antecedent conditions; yet such claims can also be considered metaphysical in origin.
Nomological determinism is the most common form of causal determinism: the notion that the past and the present dictate the future entirely and necessarily by rigid natural laws, and that every occurrence results inevitably from prior events. Quantum mechanics and various interpretations thereof pose a challenge to this view. Nomological determinism is sometimes illustrated by the thought experiment of Laplace's demon. It is sometimes called scientific determinism, although that is a misnomer; physical determinism is generally used synonymously with nomological determinism. Necessitarianism is closely related to the causal determinism described above: it is a metaphysical principle that denies all mere possibility, holding that there is exactly one way for the world to be. Leucippus claimed there were no uncaused events, and that everything occurs for a reason
Determinism
–
Many philosophical theories of determinism frame themselves with the idea that reality follows a sort of predetermined path
Determinism
–
Adequate determinism focuses on the fact that, even without a full understanding of microscopic physics, we can predict the distribution of 1000 coin tosses
Determinism
–
Nature and nurture interact in humans. A scientist looking at a sculpture after some time does not ask whether we are seeing the effects of the starting materials or of environmental influences.
Determinism
–
A technological determinist might suggest that technology like the mobile phone is the greatest factor shaping human civilization.
6.
Hypothesis
–
A hypothesis is a proposed explanation for a phenomenon. For a hypothesis to be a scientific hypothesis, the scientific method requires that one can test it. Scientists generally base scientific hypotheses on previous observations that cannot satisfactorily be explained with the available scientific theories. Even though the words hypothesis and theory are often used synonymously, a scientific hypothesis is not the same as a scientific theory. A working hypothesis is a provisionally accepted hypothesis proposed for further research. P is the assumption in a "What If" question: "Remember, the way that you prove an implication is by assuming the hypothesis." (Philip Wadler) In its ancient usage, hypothesis referred to a summary of the plot of a classical drama. The English word hypothesis comes from the ancient Greek word ὑπόθεσις (hupothesis). In Plato's Meno, Socrates dissects virtue with a method used by mathematicians, that of investigating from a hypothesis. In this sense, hypothesis refers to an idea or to a convenient mathematical approach that simplifies cumbersome calculations. In common usage in the 21st century, a hypothesis refers to an idea whose merit requires evaluation. For proper evaluation, the framer of a hypothesis needs to define specifics in operational terms; a hypothesis requires more work by the researcher in order to either confirm or disprove it. In due course, a hypothesis may become part of a theory or occasionally may grow to become a theory itself. Normally, scientific hypotheses have the form of a mathematical model. In entrepreneurial science, a hypothesis is used to formulate provisional ideas within a business setting; the formulated hypothesis is then evaluated, where it is proven to be either true or false, through a verifiability- or falsifiability-oriented experiment. Any useful hypothesis will enable predictions by reasoning. It might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature.
The prediction may also invoke statistics and only talk about probabilities. Other philosophers of science have rejected the criterion of falsifiability or supplemented it with other criteria, such as verifiability or coherence. The scientific method involves experimentation, to test the ability of some hypothesis to adequately answer the question under investigation. In contrast, unfettered observation is not as likely to raise unexplained issues or open questions in science. A thought experiment might also be used to test the hypothesis. In framing a hypothesis, the investigator must not currently know the outcome of a test, or that it remains reasonably under continuing investigation. Only in such cases does the experiment, test or study potentially increase the probability of showing the truth of a hypothesis
Hypothesis
–
Andreas Cellarius's hypothesis, demonstrating the planetary motions in eccentric and epicyclical orbits
7.
Uncertainty
–
Uncertainty is a situation which involves imperfect and/or unknown information. However, "uncertainty" is an expression without a straightforward description: it applies to predictions of future events as well as to physical measurements that are already made. Uncertainty arises in partially observable and/or stochastic environments, as well as from ignorance and/or indolence.
Uncertainty: A state of having limited knowledge where it is impossible to exactly describe the existing state, a future outcome, or more than one possible outcome.
Risk: A state of uncertainty where some possible outcomes have an undesired effect or significant loss.
Measurement of risk: A set of measured uncertainties where some possible outcomes are losses, together with the magnitudes of those losses; this also includes loss functions over continuous variables.
It will appear that a measurable uncertainty, or risk proper, is different in kind from an unmeasurable one. If probabilities are applied to the possible outcomes, using weather forecasts or even just a calibrated probability assessment, the uncertainty has been quantified. Suppose it is quantified as a 90% chance of sunshine. If there is a major, costly, outdoor event planned for tomorrow, then there is a risk, since there is a 10% chance of rain, and rain would be undesirable. Furthermore, suppose $100,000 would be lost if it rains; then the risk has been quantified as well: a 10% chance of losing $100,000. These situations can be made even more realistic by quantifying light rain vs. heavy rain. Some may represent the risk in this example as the expected opportunity loss (EOL), the chance of the loss multiplied by the amount of the loss. That is useful if the organizer of the event is risk-neutral, which most people are not: most would be willing to pay a premium to avoid the loss. An insurance company, for example, would compute an EOL as a minimum for any insurance coverage, then add onto that other operating costs and profit.
Since many people are willing to buy insurance for many reasons, the EOL alone is not the perceived value of avoiding the risk. Quantitative uses of the terms uncertainty and risk are fairly consistent across fields such as probability theory, actuarial science, and information theory. Some also create new terms without substantially changing the definitions of uncertainty or risk; for example, surprisal is a variation on uncertainty sometimes used in information theory. But outside of the more mathematical uses of the term, usage may vary widely. In cognitive psychology, uncertainty can be real, or just a matter of perception, such as expectations, threats, etc. Vagueness or ambiguity are sometimes described as second-order uncertainty, where there is uncertainty even about the definitions of the uncertain states or outcomes; the difference here is that this uncertainty is about the human definitions and concepts, not an objective fact of nature. It is usually modelled by some variation on Zadeh's fuzzy logic. It has been argued that ambiguity, however, is always avoidable, while uncertainty is not necessarily avoidable. Uncertainty may be purely a consequence of a lack of knowledge of obtainable facts: that is, there may be uncertainty about whether a new rocket design will work, but this uncertainty can be removed with further analysis and experimentation
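The expected-opportunity-loss arithmetic from the rain example can be sketched directly; the probability and loss figures are taken from the text, while the insurer's 30% loading is a purely hypothetical illustration:

```python
# Rain example above: 10% chance of rain, $100,000 lost if it rains.
p_rain = 0.10
loss_if_rain = 100_000

eol = p_rain * loss_if_rain    # expected opportunity loss
print(eol)                     # 10000.0

# An insurer would treat the EOL as a floor for any coverage, then
# add operating costs and profit; this 30% loading is made up.
loading = 0.30
premium = eol * (1 + loading)
print(premium)
```

A risk-neutral organizer would value avoiding the risk at exactly the EOL ($10,000); a risk-averse one would pay more, which is why premiums above the EOL still find buyers.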
Uncertainty
–
We are frequently presented with situations wherein a decision must be made when we are uncertain of exactly how to proceed.
8.
Epistemology
–
Epistemology is the branch of philosophy concerned with the theory of knowledge. Epistemology studies the nature of knowledge, justification, and the rationality of belief. The term "epistemology" was first used by the Scottish philosopher James Frederick Ferrier in 1854. However, according to Brett Warren, King James VI of Scotland had previously personified this philosophical concept as the character Epistemon in 1591; this philosophical approach signified a philomath seeking to obtain greater knowledge through epistemology with the use of theology. The dialogue was used by King James to educate society on various concepts, including history. The word epistemology is derived from the ancient Greek epistēmē, meaning "knowledge", and the suffix -logy, meaning "a logical discourse". J. F. Ferrier coined "epistemology" on the model of "ontology", to designate that branch of philosophy which aims to discover the meaning of knowledge, and called it the "true beginning" of philosophy. The word is equivalent to the concept Wissenschaftslehre, which was used by the German philosopher Johann Fichte. French philosophers then gave the term épistémologie a narrower meaning as "theory of knowledge"; Émile Meyerson opened his Identity and Reality, written in 1908, with this narrower usage. In mathematics, it is known that 2 + 2 = 4, but there is also knowing how to add two numbers, and knowing a person, place, thing, or activity. Some philosophers think there is an important distinction between "knowing that", "knowing how", and "acquaintance-knowledge", with epistemology being primarily concerned with the first of these. While these distinctions are not explicit in English, they are defined explicitly in other languages. In French, Portuguese, Spanish and Dutch, "to know (a person)" is translated using connaître, conhecer, conocer and kennen respectively; modern Greek has the verbs γνωρίζω and ξέρω. Italian has the verbs conoscere and sapere, and the nouns for knowledge are conoscenza and sapienza; German has the verbs wissen and kennen.
The verb itself implies a process: you have to go from one state to another. This verb seems to be the most appropriate in terms of describing the episteme in one of the modern European languages; hence the German name Erkenntnistheorie. The theoretical interpretation and significance of these linguistic issues remains controversial. In his paper "On Denoting" and his later book Problems of Philosophy, Bertrand Russell stressed the distinction between "knowledge by description" and "knowledge by acquaintance". Gilbert Ryle is also credited with stressing the distinction between knowing how and knowing that, in The Concept of Mind; this position is essentially Ryle's, who argued that a failure to acknowledge the distinction between "knowledge that" and "knowledge how" leads to infinite regress. Belief in this context includes the truth, and everything else we accept as true for ourselves from a cognitive point of view. Whether someone's belief is true is not a prerequisite for it being a belief. On the other hand, if something is actually known, then it categorically cannot be false. For example, suppose a bridge collapsed under a person's weight: it would not be accurate to say that he had known the bridge was safe, because plainly it was not. By contrast, if the bridge actually supported his weight, then he might say that he had believed that the bridge was safe, whereas now, after proving it to himself, he knows it. Epistemologists argue over whether belief is the proper truth-bearer; some would rather describe knowledge as a system of justified true propositions. Plato, in his Gorgias, argues that belief is the most commonly invoked truth-bearer
Epistemology
–
Plato – Kant – Nietzsche
9.
Measure (mathematics)
–
In mathematical analysis, a measure on a set is a systematic way to assign a number to each suitable subset of that set, intuitively interpreted as its size. In this sense, a measure is a generalization of the concepts of length, area, and volume. For instance, the Lebesgue measure of the interval [0, 1] in the real numbers is its length in the everyday sense of the word, specifically 1. Technically, a measure is a function that assigns a non-negative real number or +∞ to subsets of a set X. It must further be countably additive: the measure of a subset that can be decomposed into a finite or countably infinite number of smaller disjoint subsets is the sum of the measures of the smaller subsets. In general, one cannot associate a consistent size to each subset of a set while satisfying the other axioms of a measure. This problem was resolved by defining measure only on a sub-collection of all subsets, the so-called measurable subsets, which are required to form a σ-algebra; this means that countable unions, countable intersections and complements of measurable subsets are measurable. Non-measurable sets in a Euclidean space, on which the Lebesgue measure cannot be defined consistently, are complicated in the sense of being badly mixed up with their complement. Indeed, their existence is a consequence of the axiom of choice. Measure theory was developed in stages during the late 19th and early 20th centuries by Émile Borel, Henri Lebesgue, and Johann Radon. The main applications of measures are in the foundations of the Lebesgue integral and in Andrey Kolmogorov's axiomatisation of probability theory. Probability theory considers measures that assign to the whole set the size 1, and considers measurable subsets to be events whose probability is given by the measure. Ergodic theory considers measures that are invariant under, or arise naturally from, a dynamical system. Let X be a set and Σ a σ-algebra over X. A function μ from Σ to the extended real number line is called a measure if it satisfies the following properties. Non-negativity: for all E in Σ, μ(E) ≥ 0. Null empty set: μ(∅) = 0.
Countable additivity: for all countable collections E1, E2, … of pairwise disjoint sets in Σ, μ(E1 ∪ E2 ∪ …) = μ(E1) + μ(E2) + …. One may require that at least one set E has finite measure; then the empty set automatically has measure zero because of countable additivity, since μ(E) = μ(E ∪ ∅ ∪ ∅ ∪ …) = μ(E) + μ(∅) + μ(∅) + …, which implies that μ(∅) = 0. If only the second and third conditions of the definition of measure above are met, and μ takes on at most one of the values ±∞, then μ is called a signed measure. The pair (X, Σ) is called a measurable space, and the members of Σ are called measurable sets. If (X, ΣX) and (Y, ΣY) are two measurable spaces, then a function f : X → Y is called measurable if for every Y-measurable set B ∈ ΣY, the inverse image f⁻¹(B) belongs to ΣX. A triple (X, Σ, μ) is called a measure space. A probability measure is a measure with total measure one, i.e. μ(X) = 1; a probability space is a measure space with a probability measure
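The axioms can be checked on a toy discrete case; the uniform measure on a six-element set used here is an assumption of this sketch, chosen because every subset of a finite set is measurable:

```python
from fractions import Fraction

# A probability measure on the finite set X = {1, ..., 6}: every
# subset is measurable, and mu(E) = |E| / |X|.
X = frozenset(range(1, 7))

def mu(E):
    E = frozenset(E)
    assert E <= X, "only subsets of X are measurable here"
    return Fraction(len(E), len(X))

A, B = frozenset({1, 3}), frozenset({5})   # disjoint measurable sets
assert mu(A | B) == mu(A) + mu(B)          # additivity on disjoint sets
assert mu(frozenset()) == 0                # the empty set has measure zero
assert mu(A) <= mu(A | B)                  # monotonicity follows from additivity
assert mu(X) == 1                          # total measure one: a probability measure
print("measure axioms hold on this finite example")
```

On a finite set, countable additivity reduces to finite additivity, since any countable collection of disjoint nonempty subsets is itself finite.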
Measure (mathematics)
–
Informally, a measure has the property of being monotone in the sense that if A is a subset of B, the measure of A is less than or equal to the measure of B. Furthermore, the measure of the empty set is required to be 0.
10.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to the exact scope and definition of mathematics. Mathematicians seek out patterns and use them to formulate new conjectures, and resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, and measurement. Practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said: "The universe cannot be read until we have learned the language in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences.
Applied mathematics has led to entirely new mathematical disciplines, such as statistics. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind; there is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, and painting and weaving patterns. In Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a study of mathematics in its own right with Greek mathematics. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today; the overwhelming majority of works in this ocean contain new mathematical theorems and their proofs. The word máthēma is derived from μανθάνω, while the modern Greek equivalent is μαθαίνω, both meaning "to learn". In Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study", even in Classical times
Mathematics
–
Euclid (holding calipers), Greek mathematician, 3rd century BC, as imagined by Raphael in this detail from The School of Athens.
Mathematics
–
Greek mathematician Pythagoras (c. 570 – c. 495 BC), commonly credited with discovering the Pythagorean theorem
Mathematics
–
Leonardo Fibonacci, the Italian mathematician who introduced the Hindu–Arabic numeral system to the Western World
Mathematics
–
Carl Friedrich Gauss, known as the prince of mathematicians
11.
Science
–
Science is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe. The formal sciences are often excluded as they do not depend on empirical observations. Disciplines which use science, like engineering and medicine, may also be considered to be applied sciences. During the Islamic Golden Age, foundations for the scientific method were laid by Ibn al-Haytham in his Book of Optics. In the 17th and 18th centuries, scientists increasingly sought to formulate knowledge in terms of physical laws. Over the course of the 19th century, the word "science" became increasingly associated with the scientific method itself as a disciplined way to study the natural world; it was during this time that scientific disciplines such as biology and chemistry took their modern shapes. Science in a broad sense existed before the modern era and in many historical civilizations, but modern science is distinct in its approach and successful in its results. Science in its original sense was a word for a type of knowledge, rather than a specialized word for the pursuit of such knowledge; in particular, it was the type of knowledge which people can communicate to each other. For example, knowledge about the working of natural things was gathered long before recorded history and led to the development of complex abstract thought. This is shown by the construction of calendars and techniques for making poisonous plants edible. For this reason, it is claimed these men were the first philosophers in the strict sense; they were mainly speculators or theorists, particularly interested in astronomy. In contrast, trying to use knowledge of nature to imitate nature was seen by classical scientists as a more appropriate interest for lower-class artisans.
A clear-cut distinction between formal and empirical science was made by the pre-Socratic philosopher Parmenides; although his work Peri Physeos is a poem, it may be viewed as an epistemological essay on method in natural science. Parmenides' ἐὸν may refer to a formal system or calculus which can describe nature more precisely than natural languages, and "physis" may be identical to ἐὸν. He criticized the older type of study of physics as too purely speculative and lacking in self-criticism. He was particularly concerned that some of the early physicists treated nature as if it could be assumed that it had no intelligent order, explaining things merely in terms of motion and matter; the study of such things had been the realm of mythology and tradition, however. Aristotle later created a less controversial systematic programme of Socratic philosophy which was teleological, and he rejected many of the conclusions of earlier scientists. For example, in his physics the sun goes around the earth, and each thing has a formal cause and a final cause and a role in the rational cosmic order. Motion and change are described as the actualization of potentials already in things. While the Socratics insisted that philosophy should be used to consider the practical question of the best way to live for a human being, they did not argue for any other types of applied science
Science
–
Maize, known in some English-speaking countries as corn, is a large grain plant domesticated by indigenous peoples in Mesoamerica in prehistoric times.
Science
–
The scale of the universe mapped to the branches of science and the hierarchy of science.
Science
–
Aristotle (384 BC – 322 BC), one of the early figures in the development of the scientific method.
Science
–
Galen (129 – c. 216) noted that the optic chiasm is X-shaped. (Engraving from Vesalius, 1543)
12.
Machine learning
–
Machine learning is the subfield of computer science that, according to Arthur Samuel in 1959, gives computers the ability to learn without being explicitly programmed. Machine learning is related to computational statistics, which also focuses on prediction-making through the use of computers, and it has strong ties to mathematical optimization, which delivers methods, theory and application domains to the field. Machine learning is sometimes conflated with data mining, where the latter subfield focuses more on exploratory data analysis and is known as unsupervised learning. Machine learning can also be unsupervised and be used to learn and establish baseline behavioral profiles for various entities. Tom M. Mitchell provided a widely quoted, more formal definition of the field, in the spirit of Alan Turing's proposal that the question "Can machines think?" be replaced with the question "Can machines do what we can do?" In that proposal Turing explores the characteristics that could be possessed by a thinking machine. Machine learning tasks are typically classified into three categories, depending on the nature of the learning signal or feedback available to a learning system. In supervised learning, the computer is presented with example inputs and their desired outputs, given by a teacher. In unsupervised learning, no labels are given to the learning algorithm; unsupervised learning can be a goal in itself or a means towards an end. In reinforcement learning, a computer program interacts with an environment in which it must perform a certain goal, and is provided feedback in terms of rewards and punishments as it navigates its problem space. Between supervised and unsupervised learning is semi-supervised learning, where the teacher gives an incomplete training signal: a training set with some of the target outputs missing. Transduction is a special case of this principle where the entire set of problem instances is known at learning time. Among other categories of machine learning problems, learning to learn learns its own inductive bias based on previous experience; this is typically tackled in a supervised way. 
Spam filtering is an example of classification, where the inputs are email messages. In regression, also a supervised problem, the outputs are continuous rather than discrete. In clustering, a set of inputs is to be divided into groups; unlike in classification, the groups are not known beforehand, making this typically an unsupervised task. Density estimation finds the distribution of inputs in some space, and dimensionality reduction simplifies inputs by mapping them into a lower-dimensional space. Topic modeling is a related problem, where a program is given a list of human language documents and is tasked to find out which documents cover similar topics. As a scientific endeavour, machine learning grew out of the quest for artificial intelligence; already in the early days of AI as an academic discipline, some researchers were interested in having machines learn from data.
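The supervised-learning setup described above can be sketched in a few lines: a nearest-centroid classifier "learns" one centroid per label from example inputs and their teacher-given outputs, then assigns a new input the label of the closest centroid. This is a minimal illustration rather than any standard library API; the function names and data are invented.

```python
# Minimal sketch of supervised classification: learn one centroid per
# label from labeled 2-D points, then predict by nearest centroid.

def fit_centroids(examples):
    """examples: list of ((x, y), label) pairs -> {label: centroid}."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {lab: (sx / counts[lab], sy / counts[lab])
            for lab, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Return the label whose centroid is closest to `point`."""
    px, py = point
    return min(centroids,
               key=lambda lab: (centroids[lab][0] - px) ** 2
                             + (centroids[lab][1] - py) ** 2)

training = [((0, 0), "a"), ((1, 0), "a"), ((9, 9), "b"), ((10, 8), "b")]
model = fit_centroids(training)
print(predict(model, (0.5, 0.5)))   # a point near the "a" cluster
```

In clustering, by contrast, the labels in `training` would be absent and the algorithm would have to discover the two groups on its own.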
Machine learning
–
Machine learning and data mining
13.
Philosophy
–
Philosophy is the study of general and fundamental problems concerning matters such as existence, knowledge, values, reason, mind, and language. The term was probably coined by Pythagoras. Philosophical methods include questioning, critical discussion, rational argument and systematic presentation. Classic philosophical questions include: Is it possible to know anything and to prove it? However, philosophers might also pose more practical and concrete questions, such as: Is it better to be just or unjust? Historically, philosophy encompassed any body of knowledge; from the time of the Ancient Greek philosopher Aristotle to the 19th century, natural philosophy encompassed astronomy, medicine and physics. For example, Newton's 1687 Mathematical Principles of Natural Philosophy later became classified as a book of physics. In the 19th century, the growth of modern research universities led academic philosophy and other disciplines to professionalize and specialize. In the modern era, some investigations that were part of philosophy became separate academic disciplines, including psychology and sociology. Other investigations closely related to art, science, politics, or other pursuits remained part of philosophy; for example: Is beauty objective or subjective? Are there many scientific methods or just one? Is political utopia a hopeful dream or hopeless fantasy? Major sub-fields of academic philosophy include metaphysics, epistemology, ethics, aesthetics, political philosophy, logic and philosophy of science. Since the 20th century, professional philosophers contribute to society primarily as professors, researchers and writers. Traditionally, the term referred to any body of knowledge; in this sense, philosophy is related to religion, mathematics, natural science and education. This division is not obsolete but has changed: natural philosophy has split into the various natural sciences, especially astronomy, physics, chemistry, biology and cosmology. 
Moral philosophy has birthed the social sciences, but still includes value theory. Metaphysical philosophy has birthed formal sciences such as logic, mathematics and philosophy of science, but still includes epistemology, cosmology and others. Many philosophical debates that began in ancient times are still debated today. Colin McGinn and others claim that no philosophical progress has occurred during that interval; Chalmers and others, by contrast, see progress in philosophy similar to that in science. In one general sense, philosophy is associated with wisdom, intellectual culture and a search for knowledge. In that sense, all cultures and literate societies ask philosophical questions, such as how we are to live. A broad and impartial conception of philosophy, then, finds a reasoned inquiry into such matters as reality, morality and life in all world civilizations. Socrates was an influential philosopher who insisted that he possessed no wisdom but was a pursuer of wisdom.
Philosophy
–
René Descartes
Philosophy
–
Thomas Aquinas
Philosophy
–
Jeremy Bentham
Philosophy
–
Thomas Hobbes
14.
Random
–
Randomness is the lack of pattern or predictability in events. A random sequence of events, symbols or steps has no order. Individual random events are by definition unpredictable, but in many cases the frequency of different outcomes over a large number of events is predictable. For example, when throwing two dice, the outcome of any particular roll is unpredictable, but a sum of 7 will occur twice as often as 4. In this view, randomness is a measure of uncertainty of an outcome, rather than haphazardness, and applies to concepts of chance and probability. The fields of mathematics, probability, and statistics use formal definitions of randomness. In statistics, a random variable is an assignment of a numerical value to each possible outcome of an event space. This association facilitates the identification and the calculation of probabilities of the events. Random variables can appear in random sequences, and a random process is a sequence of random variables whose outcomes do not follow a deterministic pattern. These and other constructs are extremely useful in probability theory and the applications of randomness. Randomness is most often used in statistics to signify well-defined statistical properties. Monte Carlo methods, which rely on random input, are important techniques in science, as, for instance, in computational science. By analogy, quasi-Monte Carlo methods use quasirandom number generators. Random selection is a method of selecting items from a population where the probability of choosing a specific item is the proportion of those items in the population. For example, with a bowl containing just 10 red marbles and 90 blue marbles, a random selection mechanism would choose a red marble with probability 1/10. Note that a random selection mechanism that selected 10 marbles from this bowl would not necessarily result in 1 red and 9 blue. 
In situations where a population consists of items that are distinguishable, a random selection mechanism requires equal probabilities for any item to be chosen. That is, if the selection process is such that each member of a population, say research subjects, has the same probability of being chosen, then we can say the selection process is random. In ancient history, the concepts of chance and randomness were intertwined with that of fate. Many ancient peoples threw dice to determine fate, and this later evolved into games of chance. Most ancient cultures used various methods of divination to attempt to circumvent randomness. The Chinese of 3000 years ago were perhaps the earliest people to formalize odds and chance. The Greek philosophers discussed randomness at length, but only in non-quantitative forms, and it was only in the 16th century that Italian mathematicians began to formalize the odds associated with various games of chance. The invention of the calculus had a positive impact on the formal study of randomness. The early part of the 20th century saw a rapid growth in the formal analysis of randomness. In the mid- to late-20th century, ideas of information theory introduced new dimensions to the field via the concept of algorithmic randomness.
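Both claims above are easy to check directly. The sketch below (variable names invented) enumerates the 36 equally likely ordered outcomes of two dice, then draws one random selection of 10 marbles from the 10-red/90-blue bowl to show that the draw need not be exactly 1 red and 9 blue.

```python
import random
from itertools import product
from collections import Counter

# 1) With two dice, a sum of 7 occurs twice as often as a sum of 4:
#    count the 36 equally likely ordered outcomes.
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
print(counts[7], counts[4])        # 6 ways vs 3 ways, i.e. 6/36 vs 3/36

# 2) One random selection of 10 marbles from the bowl described above:
#    the red count varies from draw to draw around its expectation of 1.
bowl = ["red"] * 10 + ["blue"] * 90
draw = random.sample(bowl, 10)
print(Counter(draw))
```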
Random
–
Ancient fresco of dice players in Pompeii.
Random
–
A pseudorandomly generated bitmap.
Random
–
The ball in a roulette can be used as a source of apparent randomness, because its behavior is very sensitive to the initial conditions.
15.
Likelihood function
–
In statistics, a likelihood function is a function of the parameters of a statistical model, given data. Likelihood functions play a key role in statistical inference, especially in methods of estimating a parameter from a set of statistics. In informal contexts, "likelihood" is often used as a synonym for "probability"; in statistics, a distinction is made depending on the roles of outcomes vs. parameters. Probability is used before data are available to describe possible future outcomes given a fixed value for the parameter. Likelihood is used after data are available to describe a function of a parameter for a given outcome. The likelihood of a parameter value θ, given outcomes x, is equal to the probability assumed for those observed outcomes given that parameter value. The likelihood function is defined differently for discrete and continuous probability distributions. Let X be a random variable with a discrete probability distribution p depending on a parameter θ. Then the function L(θ | x) = p_θ(x) = P_θ(X = x), considered as a function of θ, is called the likelihood function. Let X be a random variable following an absolutely continuous probability distribution with density function f depending on a parameter θ. Then the function L(θ | x) = f_θ(x), considered as a function of θ, is called the likelihood function. Sometimes the density function for the value x of X for the parameter value θ is written as f(x | θ). This provides a likelihood function for any probability model with all distributions, whether discrete, absolutely continuous, or a mixture. For many applications, the logarithm of the likelihood function, called the log-likelihood, is more convenient to work with. For example, some likelihood functions are for the parameters that explain a collection of statistically independent observations. In such a situation, the likelihood function factors into a product of individual likelihood functions; the logarithm of this product is a sum of individual logarithms, and the derivative of a sum of terms is often easier to compute than the derivative of a product. 
In addition, several common distributions have likelihood functions that contain products of factors involving exponentiation; the logarithm of such a function is a sum of products, again easier to differentiate than the original function. Edwards established the basis for use of the log-likelihood ratio as a measure of relative support for one hypothesis against another; the support function is then the logarithm of the likelihood function. Both terms are used in phylogenetics but were not adopted in a general treatment of the topic of statistical evidence. As an example, the gamma distribution has two parameters, α and β; given an observation x, the likelihood function is L(α, β | x) = (β^α / Γ(α)) · x^(α−1) · e^(−β x).
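As a small worked example of these definitions, take a Bernoulli coin model: after observing the outcomes HH, the likelihood of heads-probability p is L(p | HH) = p · p, and taking logarithms turns the product over independent observations into a sum. The sketch below is illustrative only; the function names and the crude grid search are invented.

```python
import math

def likelihood(p, outcomes):
    """Bernoulli likelihood of p given a string like 'HH' or 'HTH'."""
    L = 1.0
    for o in outcomes:
        L *= p if o == "H" else (1.0 - p)
    return L

def log_likelihood(p, outcomes):
    """Log-likelihood: a sum of one term per independent observation."""
    return sum(math.log(p if o == "H" else 1.0 - p) for o in outcomes)

# A crude grid search locates the maximum-likelihood estimate of p:
grid = [i / 100 for i in range(1, 100)]
p_hat = max(grid, key=lambda p: log_likelihood(p, "HH"))
print(p_hat)   # 0.99: with only heads observed, the MLE pushes p toward 1
```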
Likelihood function
–
The likelihood function for estimating the probability of a coin landing heads-up without prior knowledge after observing HH
16.
History of probability
–
While statistics deals with data and inferences from it, probability deals with the stochastic processes which lie behind data or outcomes. The mathematical sense of the term "probability" is from 1718; in the 18th century, the term "chance" was also used in the mathematical sense of probability. That word is ultimately from the Latin cadentia, i.e. "a fall". Similarly, the derived noun "likelihood" had a meaning of similarity or resemblance, but took on a meaning of probability from the mid 15th century. Ancient and medieval law of evidence developed a grading of degrees of proof, probabilities and presumptions. Christiaan Huygens gave a comprehensive treatment of the subject. According to Games, Gods and Gambling (ISBN 978-0-85264-171-2) by F. N. David, in ancient times there were games played using astragali. The pottery of ancient Greece provides evidence that a circle was drawn on the floor to which the astragali were tossed. In Egypt, excavators of tombs found a game they called Hounds and Jackals, and it seems that this represents the early stages of the creation of dice. The first dice game mentioned in literature of the Christian era was called Hazard, thought to have been brought to Europe by the knights returning from the Crusades. A commentator of Dante puts further thought into this game: with 3 dice, the lowest number you can get is 3, while a 4 can be achieved by having a two on one die and aces on the other two dice. Cardano also thought about the throwing of three dice. When 3 dice are thrown, there are, at face value, the same number of ways to throw a 9 as a 10: each can be partitioned into three die faces in six ways. Counting ordered arrangements rather than mere partitions, however, Cardano found that the probability of throwing a 9 is less than that of throwing a 10. He also demonstrated the efficacy of defining odds as the ratio of favourable to unfavourable outcomes. In addition, the famous Galileo wrote about die-throwing sometime between 1613 and 1623; he essentially thought about Cardano's problem, that the probability of throwing a 9 is less than throwing a 10. 
Galileo had the following to say: certain numbers have the ability to be thrown more easily than others, because there are more ways to create that number. Although 9 and 10 have the same number of partitions, 10 is considered by dice players to be more common than 9. Jacob Bernoulli's Ars Conjectandi and Abraham de Moivre's The Doctrine of Chances put probability on a sound mathematical footing, showing how to calculate a wide range of complex probabilities. The power of probabilistic methods in dealing with uncertainty was shown by Gauss's determination of the orbit of Ceres from a few observations. The field of the history of probability itself was established by Isaac Todhunter's monumental A History of the Mathematical Theory of Probability from the Time of Pascal to that of Laplace. A hypothesis, for example that a drug is usually effective, gives rise to a probability distribution of observable outcomes; if observations approximately agree with the hypothesis, it is confirmed, and if not, the hypothesis is rejected.
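The Cardano/Galileo counting argument can be verified by brute force: 9 and 10 can each be written as a sum of three die faces in six ways, yet 10 is the more probable total, because its partitions admit more orderings. A sketch, assuming fair six-sided dice:

```python
from itertools import product

# Enumerate all 6^3 = 216 equally likely ordered rolls of three dice
# and tally each possible sum.
counts = {s: 0 for s in range(3, 19)}
for roll in product(range(1, 7), repeat=3):
    counts[sum(roll)] += 1

print(counts[9], counts[10])   # 25 vs 27 of the 216 ordered rolls
```

So P(9) = 25/216 < P(10) = 27/216, exactly as Cardano and Galileo concluded.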
History of probability
17.
Witness
–
A witness is someone who has, who claims to have, or is thought, by someone with authority to compel testimony, to have knowledge relevant to an event or other matter of interest. A percipient witness or eyewitness is one who testifies to what they perceived through their senses; a hearsay witness is one who testifies to what someone else said or wrote. In most court proceedings there are limitations on when hearsay evidence is admissible. Such limitations do not apply to grand jury investigations or many administrative proceedings, and some types of statements are not deemed to be hearsay and are not subject to such limitations. An expert witness may or may not also be a percipient witness. A reputation witness is one who testifies about the reputation of a person or business entity, when reputation is material to the dispute at issue. Sometimes the testimony is provided in public, sometimes in a confidential setting. Although informally a witness includes whoever perceived the event, in law a witness is different from an informant. A confidential informant is someone who claims to have witnessed an event or to have hearsay information; the information from the confidential informant may have been used by a police officer or other official acting as a hearsay witness to obtain a search warrant. A subpoena commands a person to appear, and is used to compel the testimony of a witness in a trial. Usually, it can be issued by a judge, by the lawyer representing the plaintiff or the defendant in a civil trial, or by the prosecutor or the defense attorney in a criminal proceeding. In many jurisdictions it is compulsory to comply with a subpoena and to take an oath. In a court proceeding, a witness may be called by either the prosecution or the defense. The side that calls the witness first asks questions in what is called direct examination; the opposing side then may ask their own questions in what is called cross-examination. 
In some cases, redirect examination may then be used by the side that called the witness. Recalling a witness means calling a witness who has already given testimony in a proceeding to testify again. Witnesses are usually permitted to testify to what they experienced first hand. In most cases, they may not testify about something they were told; this restriction does not apply to expert witnesses, who may, however, only testify in the area of their expertise. Eyewitness testimony is generally presumed to be more reliable than circumstantial evidence. Studies have shown, however, that individual, separate witness testimony is often flawed; this can occur because of flaws in eyewitness identification, or because a witness is lying. One study involved an experiment in which subjects acted as jurors in a criminal case. Jurors heard a description of a robbery-murder, then a prosecution argument, and then an argument for the defense.
Witness
–
Heinrich Buscher as a witness during the Nuremberg Trials
18.
Gerolamo Cardano
–
He wrote more than 200 works on science. He made significant contributions to the study of hypocycloids, published in De proportionibus; the generating circles of these hypocycloids were later named Cardano circles or cardanic circles and were used for the construction of the first high-speed printing presses. Today, he is known for his achievements in algebra. He was born in Pavia, Lombardy, the child of Fazio Cardano, a mathematically gifted jurist and lawyer. In his autobiography, Cardano wrote that his mother, Chiara Micheri, had taken various abortive medicines to terminate the pregnancy; he was "taken by violent means" from his mother, who was in labour for three days. Shortly before his birth, his mother had to move from Milan to Pavia to escape the Plague. His eccentric and confrontational style did not earn him many friends, and he had a difficult time finding work after his studies had ended. In 1525, Cardano repeatedly applied to the College of Physicians in Milan, without success. He suffered from impotence throughout the early part of his life, but recovered and married Lucia Banderini in 1531. Before her death in 1546, she bore him three children: Giovanni Battista, Chiara and Aldo. Cardano was the first mathematician to make systematic use of numbers less than zero. He published, with attribution, the solution of Scipione del Ferro to the cubic equation and the solution of his student Lodovico Ferrari to the quartic equation in his 1545 book Ars Magna. The solution to one case of the cubic equation, ax^3 + bx + c = 0, had been communicated to him by Niccolò Fontana Tartaglia in the form of a poem. In Opus novum de proportionibus he introduced the binomial coefficients and the binomial theorem. Cardano was notoriously short of money and kept himself solvent by being an accomplished gambler and chess player. He used the game of throwing dice to understand the basic concepts of probability. 
He demonstrated the efficacy of defining odds as the ratio of favourable to unfavourable outcomes, and he was also aware of the multiplication rule for independent events, though he was not certain about what values should be multiplied. Cardano made several contributions to hydrodynamics and held that perpetual motion is impossible. He published two encyclopedias of natural science which contain a wide variety of inventions, facts, and occult superstitions. He also introduced the Cardan grille, a cryptographic writing tool, in 1550. Cardano has also been credited with the invention of the so-called Cardano's Rings, also called Chinese Rings, and he was familiar with a report by Rudolph Agricola about a deaf mute who had learned to write. Two of Cardano's children, Giovanni and Aldo Battista, came to ignoble ends. Giovanni Battista, Cardano's eldest and favorite son, was tried and beheaded in 1560 for poisoning his wife, after he discovered that their three children were not his. Aldo Battista was a gambler who stole money from his father and was disinherited by him in 1569. Cardano was arrested by the Inquisition in 1570 for unknown reasons, and forced to spend several months in prison and abjure his professorship.
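Cardano's definition of odds as the ratio of favourable to unfavourable outcomes translates directly into modern terms. A small sketch with invented helper names, using exact rational arithmetic:

```python
from fractions import Fraction

def odds(favourable, total):
    """Cardano-style odds: favourable outcomes over unfavourable ones."""
    return Fraction(favourable, total - favourable)

def probability_from_odds(o):
    """Convert odds back to a probability: o / (1 + o)."""
    return o / (1 + o)

# Rolling a given face of a fair die: 1 favourable outcome of 6.
print(odds(1, 6))                            # 1/5, i.e. odds of 1:5
print(probability_from_odds(Fraction(1, 5))) # back to the probability 1/6
```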
Gerolamo Cardano
–
Gerolamo Cardano
Gerolamo Cardano
–
De propria vita, 1821
Gerolamo Cardano
–
Portrait of Cardano on display at the School of Mathematics and Statistics, University of St Andrews.
Gerolamo Cardano
–
A portrait of Gerolamo Cardano
19.
Pierre de Fermat
–
He made notable contributions to analytic geometry, probability, and optics. He is best known for Fermat's principle for light propagation and for Fermat's Last Theorem in number theory. Fermat was born in the first decade of the 17th century in Beaumont-de-Lomagne, France; the late 15th-century mansion where Fermat was born is now a museum. He was from Gascony, where his father, Dominique Fermat, was a leather merchant. Pierre had one brother and two sisters and was almost certainly brought up in the town of his birth. There is little evidence concerning his school education, but it was probably at the Collège de Navarre in Montauban. He attended the University of Orléans from 1623 and received a bachelor's degree in law in 1626. In Bordeaux he began his first serious mathematical researches, and in 1629 he gave a copy of his restoration of Apollonius's De Locis Planis to one of the mathematicians there. There he became much influenced by the work of François Viète. In 1630, he bought the office of a councillor at the Parlement de Toulouse, one of the High Courts of Judicature in France, and he held this office for the rest of his life. Fermat thereby became entitled to change his name from Pierre Fermat to Pierre de Fermat. Fluent in six languages, Fermat was praised for his written verse in several of them, and his advice was eagerly sought regarding the emendation of Greek texts. He communicated most of his work in letters to friends, often with little or no proof of his theorems. In some of these letters to his friends he explored many of the ideas of calculus before Newton or Leibniz. Fermat was a trained lawyer, making mathematics more of a hobby than a profession; nevertheless, he made important contributions to analytical geometry, probability, number theory and calculus. Secrecy was common in European mathematical circles at the time, and this naturally led to priority disputes with contemporaries such as Descartes and Wallis. 
Anders Hald writes that "the basis of Fermat's mathematics was the classical Greek treatises combined with Vieta's new algebraic methods." Fermat's pioneering work in analytic geometry was circulated in manuscript form in 1636, predating the publication of Descartes's famous La géométrie. This manuscript was published posthumously in 1679 in Varia opera mathematica. In these works, Fermat obtained a technique for finding the centers of gravity of various plane and solid figures, which led to his further work in quadrature. Fermat was the first person known to have evaluated the integral of general power functions. With his method, he was able to reduce this evaluation to the sum of geometric series; the resulting formula was helpful to Newton, and then Leibniz, when they independently developed the fundamental theorem of calculus. In number theory, Fermat studied Pell's equation, perfect numbers and amicable numbers, and it was while researching perfect numbers that he discovered Fermat's little theorem. He developed the two-square theorem and the polygonal number theorem. Although Fermat claimed to have proved all his arithmetic theorems, few records of his proofs have survived.
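Fermat's reduction of the quadrature of y = x^n to a geometric series can be reconstructed as follows (modern notation, not Fermat's own): partition [0, a] at the geometric points a, ar, ar^2, … for some 0 < r < 1, take rectangles of height (ar^k)^n on the intervals [ar^{k+1}, ar^k], and sum their areas, which form a geometric series in r^{n+1}:

```latex
\sum_{k=0}^{\infty} (a r^k)^n \left(a r^k - a r^{k+1}\right)
  = a^{n+1}(1-r)\sum_{k=0}^{\infty} r^{k(n+1)}
  = \frac{a^{n+1}(1-r)}{1-r^{n+1}}
  \;\longrightarrow\; \frac{a^{n+1}}{n+1} \quad (r \to 1).
```

Letting r tend to 1 refines the partition and yields a^{n+1}/(n+1), the familiar value of the integral of x^n from 0 to a.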
Pierre de Fermat
–
Pierre de Fermat
Pierre de Fermat
–
Bust in the Salle des Illustres in Capitole de Toulouse
Pierre de Fermat
–
Place of burial of Pierre de Fermat in Place Jean Jaurès, Castres. Translation of the plaque: in this place was buried on January 13, 1665, Pierre de Fermat, councilor of the chamber of Edit [Parlement of Toulouse] and mathematician of great renown, celebrated for his theorem a^n + b^n ≠ c^n for n > 2.
Pierre de Fermat
–
Holographic will handwritten by Fermat on 4 March 1660 — kept at the Departmental Archives of Haute-Garonne, in Toulouse
20.
Blaise Pascal
–
Blaise Pascal was a French mathematician, physicist, inventor, writer and Christian philosopher. He was a child prodigy who was educated by his father. Pascal also wrote in defence of the scientific method. In 1642, while still a teenager, he started some pioneering work on calculating machines; after three years of effort and 50 prototypes, he built 20 finished machines over the following 10 years. Following Galileo Galilei and Torricelli, in 1647 he rebutted Aristotle's followers who insisted that nature abhors a vacuum. Pascal's results caused many disputes before being accepted. In 1646, he and his sister Jacqueline identified with the religious movement within Catholicism known by its detractors as Jansenism. Following a religious experience in late 1654, he began writing influential works on philosophy. His two most famous works date from this period: the Lettres provinciales and the Pensées, the former set in the conflict between Jansenists and Jesuits. In that year, he also wrote an important treatise on the arithmetical triangle. Between 1658 and 1659 he wrote on the cycloid and its use in calculating the volume of solids. Pascal had poor health, especially after the age of 18, and he died just two months after his 39th birthday. Pascal was born in Clermont-Ferrand, in France's Auvergne region. He lost his mother, Antoinette Begon, at the age of three. His father, Étienne Pascal, who also had an interest in science and mathematics, was a local judge. Pascal had two sisters, the younger Jacqueline and the elder Gilberte. In 1631, five years after the death of his wife, Étienne moved with his children to Paris; the newly arrived family soon hired Louise Delfault, a maid who eventually became an instrumental member of the family. Étienne, who never remarried, decided that he alone would educate his children, for they all showed extraordinary intellectual ability; the young Pascal showed an amazing aptitude for mathematics and science. 
Particularly of interest to Pascal was a work of Desargues on conic sections. Following Desargues' thinking, the young Pascal produced a short treatise on what is now known as Pascal's theorem. It states that if a hexagon is inscribed in a circle then the three intersection points of opposite sides lie on a line. Pascal's work was so precocious that Descartes was convinced that Pascal's father had written it. In France at that time, offices and positions could be, and were, bought and sold. In 1631 Étienne sold his position as president of the Cour des Aides for 65,665 livres. The money was invested in a government bond which provided, if not a lavish, then certainly a comfortable income which allowed the Pascal family to move to, and enjoy, Paris. But in 1638 Richelieu, desperate for money to carry on the Thirty Years' War, defaulted on the government's bonds. Suddenly Étienne Pascal's worth had dropped from nearly 66,000 livres to less than 7,300. It was only when Jacqueline performed well in a children's play with Richelieu in attendance that Étienne was pardoned.
Blaise Pascal
–
Painting of Blaise Pascal made by François II Quesnel for Gérard Edelinck in 1691.
Blaise Pascal
–
An early Pascaline on display at the Musée des Arts et Métiers, Paris
Blaise Pascal
–
Portrait of Pascal
Blaise Pascal
–
Pascal studying the cycloid, by Augustin Pajou, 1785, Louvre
21.
Christiaan Huygens
–
Christiaan Huygens, FRS, was a prominent Dutch mathematician and scientist. He is known particularly as an astronomer, physicist, probabilist and horologist. Huygens was a leading scientist of his time. His work included early telescopic studies of the rings of Saturn and the discovery of its moon Titan; he published major studies of mechanics and optics, and pioneered work on games of chance. Christiaan Huygens was born on 14 April 1629 in The Hague, into a rich and influential Dutch family; Christiaan was named after his paternal grandfather. His mother was Suzanna van Baerle; she died in 1637, shortly after the birth of Huygens' sister. The couple had five children: Constantijn, Christiaan, Lodewijk, Philips and Suzanna. Constantijn Huygens, the father, was a diplomat and advisor to the House of Orange, and also a poet and musician. His friends included Galileo Galilei, Marin Mersenne and René Descartes. Huygens was educated at home until turning sixteen years old. He liked to play with miniatures of mills and other machines. His father gave him a liberal education: he studied languages and music, history and geography, mathematics, logic and rhetoric, but also dancing, fencing and horse riding. In 1644 Huygens had as his mathematical tutor Jan Jansz de Jonge Stampioen; Descartes was impressed by his skills in geometry. His father sent Huygens to study law and mathematics at the University of Leiden. Frans van Schooten was an academic at Leiden from 1646, and also a private tutor to Huygens and his elder brother, replacing Stampioen on the advice of Descartes. Van Schooten brought his mathematical education up to date, in particular introducing him to the work of Fermat on differential geometry. Constantijn Huygens was closely involved in the new College at Breda, which lasted only to 1669; Christiaan Huygens lived at the home of the jurist Johann Henryk Dauber, and had mathematics classes with the English lecturer John Pell. 
He completed his studies in August 1649. He then had a stint as a diplomat on a mission with Henry, Duke of Nassau, which took him to Bentheim and then Flensburg. He took off for Denmark, visited Copenhagen and Helsingør, and hoped to cross the Øresund to visit Descartes in Stockholm. While his father Constantijn had wished his son Christiaan to be a diplomat, it was not to be. In political terms, the First Stadtholderless Period that began in 1650 meant that the House of Orange was not in power, removing Constantijn's influence; further, he realised that his son had no interest in such a career. Huygens generally wrote in French or Latin. While still a student at Leiden he began a correspondence with the intelligencer Mersenne. Mersenne wrote to Constantijn on his son's talent for mathematics, and the letters show the early interests of Huygens in mathematics.
Christiaan Huygens
–
Christiaan Huygens by Bernard Vaillant, Museum Hofwijck, Voorburg
Christiaan Huygens
–
Correspondance
Christiaan Huygens
–
The catenary in a manuscript of Huygens.
Christiaan Huygens
–
Christiaan Huygens, relief by Jean-Jacques Clérion, around 1670?
22.
Ars Conjectandi
–
Ars Conjectandi is a book on combinatorics and mathematical probability written by Jacob Bernoulli and published in 1713, eight years after his death, by his nephew, Niklaus Bernoulli. The importance of this early work had a large impact on both contemporary and later mathematicians; for example, Abraham de Moivre. Bernoulli wrote the text between 1684 and 1689, incorporating the work of mathematicians such as Christiaan Huygens, Gerolamo Cardano and Pierre de Fermat. Core topics from probability, such as expected value, were also a significant portion of this important work. Cardano's influence on the mathematical scene, however, was not great; he wrote only one light tome on the subject, in 1525, titled Liber de ludo aleae. In 1665 Pascal's results on the eponymous Pascal's triangle were published posthumously; he had referred to the triangle in his work Traité du triangle arithmétique as the "arithmetic triangle". In 1662, the book La Logique ou l'Art de Penser was published anonymously in Paris; the authors presumably were Antoine Arnauld and Pierre Nicole, two leading Jansenists who worked together with Blaise Pascal. Its Latin title is Ars cogitandi, and it was a successful book on logic of the time. In the field of statistics and applied probability, John Graunt published Natural and Political Observations Made upon the Bills of Mortality, also in 1662. De Witt's work was not widely distributed beyond the Dutch Republic, perhaps due to his fall from power and execution by mob in 1672. These pioneers showed that probability could be more than mere combinatorics, and in their wake Bernoulli produced much of the results contained in Ars Conjectandi between 1684 and 1689, which he recorded in his diary Meditationes. The latter, however, did manage to provide Pascal's and Huygens' work. Bernoulli's progress over time can be pursued by means of the Meditationes. 
Three working periods with respect to his discovery can be distinguished by their aims; in the last period, finally, the problem of measuring probabilities is solved. Before the publication of his Ars Conjectandi, Bernoulli had produced a number of works related to probability. In the Parallelismus ratiocinii logici et algebraici, and in the Journal des Sçavans (1685, p. 314), there appear two problems concerning the probability each of two players may have of winning in a game of dice; solutions were published in the Acta Eruditorum (1690, pp. 219–223) in the article Quaestiones nonnullae de usuris, and in addition Leibniz himself published a solution in the same journal on pages 387–390. The Theses logicae de conversione et oppositione enunciationum, a lecture delivered at Basel on 12 February 1686, contains theses XXXI to XL, which are related to the theory of probability. There followed the De Arte Combinatoria Oratio Inauguralis (1692) and the Lettre à un amy sur les parties du jeu de paume, that is, a letter to a friend on the game of court tennis. Finally, between 1703 and 1705, Leibniz corresponded with Jakob after learning about his discoveries in probability from his brother Johann
Ars Conjectandi
–
Christiaan Huygens published the first treatise on probability
Ars Conjectandi
–
The cover page of Ars Conjectandi
Ars Conjectandi
–
Portrait of Jakob Bernoulli in 1687
Ars Conjectandi
–
Abraham de Moivre's work was built in part on Bernoulli's
23.
Abraham de Moivre
–
Abraham de Moivre was a French mathematician known for de Moivre's formula, which links complex numbers and trigonometry, and for his work on the normal distribution and probability theory. He was a friend of Isaac Newton and Edmond Halley, and even though he faced religious persecution he remained a steadfast Christian throughout his life. Among his fellow Huguenot exiles in England, he was a colleague of the editor and translator Pierre des Maizeaux. De Moivre wrote a book on probability theory, The Doctrine of Chances, said to have been prized by gamblers. He first discovered Binet's formula, the closed-form expression for Fibonacci numbers linking the nth power of the golden ratio φ to the nth Fibonacci number, and he was the first to postulate the central limit theorem. Abraham de Moivre was born in Vitry-le-François in Champagne on May 26, 1667. His father, Daniel de Moivre, was a surgeon who believed in the value of education. Though Abraham de Moivre's parents were Protestant, he first attended the Christian Brothers' Catholic school in Vitry, which was unusually tolerant given religious tensions in France at the time. When he was eleven, his parents sent him to the Protestant Academy at Sedan, which had been founded in 1579 at the initiative of Françoise de Bourbon, the widow of Henri-Robert de la Marck. In 1682 the Protestant Academy at Sedan was suppressed, and de Moivre enrolled to study logic at Saumur for two years. In 1684 he moved to Paris to study physics, and for the first time had formal mathematics training, with private lessons from Jacques Ozanam. In 1685 the Edict of Fontainebleau revoked the Edict of Nantes; it forbade Protestant worship and required that all children be baptized by Catholic priests. De Moivre was sent to the Prieuré de Saint-Martin, a school to which the authorities sent Protestant children for indoctrination into Catholicism. By the time he arrived in London, de Moivre was a competent mathematician with a good knowledge of many of the standard texts.
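The Binet formula mentioned above is easy to state in code. The following Python sketch (our own illustration, not from the article; the function names are ours) computes Fibonacci numbers from the closed form and checks them against the ordinary recurrence:

```python
import math

def fib_binet(n: int) -> int:
    """Binet's formula: F(n) = (phi**n - psi**n) / sqrt(5)."""
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2   # golden ratio
    psi = (1 - sqrt5) / 2   # conjugate root
    # round() absorbs the small floating-point error for moderate n
    return round((phi ** n - psi ** n) / sqrt5)

def fib_iter(n: int) -> int:
    """Reference values via the recurrence F(n) = F(n-1) + F(n-2)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Because |ψ| < 1, the ψⁿ term shrinks rapidly, which is why rounding φⁿ/√5 already gives the right integer for all but the smallest n.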
To make a living, de Moivre became a tutor of mathematics. He continued his studies of mathematics after visiting the Earl of Devonshire and seeing Newton's recent book, the Principia; looking through it, he realized that it was far deeper than the books he had studied previously, and he became determined to read and understand it. By 1692, de Moivre had become friends with Edmond Halley and soon after with Isaac Newton himself. In 1695, Halley communicated de Moivre's first mathematics paper, which arose from his study of fluxions in the Principia Mathematica, to the Royal Society. This paper was published in the Philosophical Transactions that same year. Shortly after publishing this paper, de Moivre also generalized Newton's noteworthy binomial theorem into the multinomial theorem; the Royal Society became apprised of this method in 1697. After de Moivre had been accepted, Halley encouraged him to turn his attention to astronomy, where de Moivre found, intuitively, a formula relating a planet's centripetal force to its distance from the centre of force; the mathematician Johann Bernoulli proved this formula in 1710. Despite his eminence, de Moivre never obtained a university chair; at least a part of the reason was a bias against his French origins. In November 1697 he was elected a Fellow of the Royal Society, and in 1712 he was appointed to a commission set up by the society, alongside Arbuthnot, Hill, Halley, Jones, Machin, Burnet, Robarts, Bonet, and Aston. The full details of the controversy can be found in the Leibniz and Newton calculus controversy article
Abraham de Moivre
–
Abraham de Moivre
Abraham de Moivre
–
Doctrine of chances, 1761
24.
Roger Cotes
–
Roger Cotes FRS was an English mathematician, known for working closely with Isaac Newton by proofreading the second edition of his famous book, the Principia, before publication. He also invented the quadrature formulas known as the Newton–Cotes formulas and first introduced what is known today as Euler's formula. He was the first Plumian Professor at Cambridge University from 1707 until his death. Cotes was born in Burbage, Leicestershire. His parents were Robert, the rector of Burbage, and his wife Grace née Farmer; Roger had an elder brother, Anthony, and a younger sister, Susanna. At first Roger attended Leicester School, where his talent was recognised. His aunt Hannah had married the Rev. John Smith, and Smith took on the role of tutor to encourage Roger's talent; the Smiths' son, Robert Smith, would become a close associate of Roger Cotes throughout his life. Cotes later studied at St Paul's School in London and entered Trinity College, Cambridge in 1699; he graduated BA in 1702 and MA in 1706. Roger Cotes's contributions to computational methods lie heavily in the field of astronomy, and Cotes began his career with a focus on astronomy. He became a fellow of Trinity College in 1707, and at age 26 he became the first Plumian Professor of Astronomy. On his appointment as professor, he opened a subscription list in an effort to provide an observatory for Trinity. Unfortunately, the observatory was still unfinished when Cotes died, and it was demolished in 1797. In correspondence with Isaac Newton, Cotes designed a telescope with a mirror revolving by clockwork. He recomputed the solar and planetary tables of Giovanni Domenico Cassini and John Flamsteed, and in 1707 he formed a school of physical sciences at Trinity in partnership with William Whiston. From 1709 to 1713, Cotes became heavily involved with the second edition of Newton's Principia.
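The Newton–Cotes idea is to approximate an integral by a fixed weighted sum of equally spaced samples of the integrand. The following Python sketch (our own illustration; the table and function names are ours, though the coefficient values are the standard closed rules) implements the rules for n = 1 to 4:

```python
# Closed Newton–Cotes rules: integral over [a, b] ≈ c * h * sum(w_i * f(a + i*h)),
# with h = (b - a) / n.  Standard coefficient tables for n = 1..4.
NEWTON_COTES = {
    1: (1 / 2,  [1, 1]),               # trapezoidal rule
    2: (1 / 3,  [1, 4, 1]),            # Simpson's rule
    3: (3 / 8,  [1, 3, 3, 1]),         # Simpson's 3/8 rule
    4: (2 / 45, [7, 32, 12, 32, 7]),   # Boole's rule
}

def newton_cotes(f, a, b, n):
    """Apply the degree-n closed Newton-Cotes rule on [a, b]."""
    c, w = NEWTON_COTES[n]
    h = (b - a) / n
    return c * h * sum(wi * f(a + i * h) for i, wi in enumerate(w))
```

Each rule of even degree n integrates polynomials up to degree n + 1 exactly: Simpson's rule (n = 2) is exact for cubics, and Boole's rule (n = 4) for quintics.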
The first edition of Principia had had only a few copies printed and was in need of revision to include Newton's later work and his principles of lunar theory. Newton at first had a casual approach to the revision, since he had all but given up scientific work; however, through the passion displayed by Cotes, Newton's scientific hunger was once again reignited. The two spent nearly three and a half years collaborating on the work, in which they fully deduce, from Newton's laws of motion, the theory of the moon and the equinoxes. Only 750 copies of the second edition were printed; however, a pirated copy from Amsterdam met all other demand
Roger Cotes
–
This bust was commissioned by Robert Smith and sculpted posthumously by Peter Scheemakers in 1758.
25.
Thomas Simpson
–
Thomas Simpson FRS was a British mathematician, inventor, and eponym of Simpson's rule for approximating definite integrals. The attribution, as often in mathematics, can be debated: this rule had been found 100 years earlier by Johannes Kepler. Simpson was born in Market Bosworth, Leicestershire. The son of a weaver, Simpson taught himself mathematics. At the age of nineteen, he married a fifty-year-old widow with two children. As a youth he became interested in astrology after seeing a solar eclipse; he also dabbled in divination and caused fits in a girl after "raising a devil" from her. After this incident, he and his wife had to flee to Derby. He moved with his wife and children to London at age twenty-five, where he supported his family by weaving during the day and teaching mathematics at night. From 1743, he taught mathematics at the Royal Military Academy, and Simpson was a fellow of the Royal Society. In 1758, Simpson was elected a member of the Royal Swedish Academy of Sciences. He died in Market Bosworth and was laid to rest in Sutton Cheney; a plaque inside the church commemorates him. In both works, Simpson cited De Moivre's work and did not claim originality beyond the presentation of some more accurate data ("who Doubtless Hath Solved the Same Otherwise", Philosophical Transactions of the Royal Society of London, 6, pp. 2093–2096). Of further related interest are problems posed in the early 1750s by J. Orchard in The British Palladium; this type of generalization was later popularized by Alfred Weber in 1909. In 1971, Luc-Normand Tellier found the first direct numerical solution of the Fermat problem; long before Von Thünen's contributions, which go back to 1818, the Fermat point problem can be seen as the very beginning of space economy. In 1985, Luc-Normand Tellier formulated an all-new problem called the "attraction-repulsion problem", which was later further analyzed by mathematicians like Chen, Hansen, Jaumard and Tuy, and Jalal and Krarup.
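Simpson's rule, in its modern composite form, approximates an integral by fitting a parabola over each pair of adjacent subintervals. A minimal Python sketch (our own illustration; the function name and signature are ours):

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    total += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd-index nodes
    total += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # interior even nodes
    return total * h / 3

# Example: the integral of sin over [0, pi] is exactly 2.
approx = simpson(math.sin, 0.0, math.pi, 100)
```

The error of the composite rule shrinks like h⁴, and the rule is exact for polynomials up to degree three, which is why even n = 2 recovers the integral of a quadratic exactly.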
Containing a Number of New Improvements on the Theory. Robertson, Edmund F., "Thomas Simpson", MacTutor History of Mathematics archive, University of St Andrews
Thomas Simpson
–
Miscellaneous tracts, 1768
26.
Pierre-Simon Laplace
–
Pierre-Simon, marquis de Laplace was an influential French scholar whose work was important to the development of mathematics, statistics, physics and astronomy. He summarized and extended the work of his predecessors in his five-volume Mécanique Céleste; this work translated the geometric study of classical mechanics into one based on calculus, opening up a broader range of problems. In statistics, the Bayesian interpretation of probability was developed mainly by Laplace. Laplace formulated Laplace's equation and pioneered the Laplace transform, which appears in many branches of mathematical physics, a field that he took a leading role in forming. The Laplacian differential operator, widely used in mathematics, is also named after him. Laplace is remembered as one of the greatest scientists of all time. Sometimes referred to as the French Newton or the Newton of France, he has been described as possessing a phenomenal natural mathematical faculty superior to that of any of his contemporaries. Laplace became a count of the Empire in 1806 and was named a marquis in 1817. Laplace was born on 23 March 1749 in Beaumont-en-Auge, Normandy, a village four miles west of Pont l'Évêque. According to W. W. Rouse Ball, his father, Pierre de Laplace, and his great-uncle, Maître Oliver de Laplace, had held the title of Chirurgien Royal, and it would seem that from a pupil he became an usher in the school at Beaumont. However, Karl Pearson is scathing about the inaccuracies in Rouse Ball's account and states that Caen was probably in Laplace's day the most intellectually active of all the towns of Normandy. It was here that Laplace was educated and was provisionally a professor, and it was here he wrote his first paper, published in the Mélanges of the Royal Society of Turin, Tome iv. 1766–1769, at least two years before he went at 22 or 23 to Paris in 1771. Thus before he was 20 he was in touch with Lagrange in Turin.
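The Laplace transform credited to Laplace above, F(s) = ∫₀^∞ f(t)e^(−st) dt, can be checked numerically. The following sketch is our own illustration (the function name and the truncation horizon T are our choices, not anything from the article); it verifies the textbook identity L{e^(−t)}(s) = 1/(s + 1) at s = 2:

```python
import math

def laplace_numeric(f, s, T=40.0, n=100_000):
    """Crude numerical Laplace transform: trapezoidal rule on [0, T].

    T truncates the infinite integral; this is fine when f(t)*exp(-s*t)
    decays quickly, as it does for the example below.
    """
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for i in range(1, n):
        t = i * h
        total += f(t) * math.exp(-s * t)
    return total * h

# L{exp(-t)}(s) = 1/(s + 1); at s = 2 the exact value is 1/3.
approx = laplace_numeric(lambda t: math.exp(-t), 2.0)
```

The trapezoidal error is O(h²) and the truncation error is of order e^(−3T), so the crude scheme already agrees with 1/3 to several decimal places.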
He did not go to Paris as a raw self-taught country lad with only a peasant background; the École Militaire of Beaumont did not replace the old school until 1776. His parents were from comfortable families: his father was Pierre Laplace, and his mother was Marie-Anne Sochon. The Laplace family was involved in agriculture until at least 1750. Pierre-Simon Laplace attended a school in the village run by a Benedictine priory, his father intending that he be ordained in the Roman Catholic Church. At sixteen, to further his father's intention, he was sent to the University of Caen to read theology. At the university, he was mentored by two enthusiastic teachers of mathematics, Christophe Gadbled and Pierre Le Canu, who awoke his zeal for the subject. Here Laplace's brilliance as a mathematician was recognised, and while still at Caen he wrote a memoir Sur le calcul intégral aux différences infiniment petites et aux différences finies. About this time he recognized that he had no vocation for the priesthood; in this connection reference may perhaps be made to the statement, which has appeared in some notices of him, that he broke altogether with the church and became an atheist. Laplace did not graduate in theology but left for Paris with a letter of introduction from Le Canu to Jean le Rond d'Alembert, who at that time was supreme in scientific circles. According to his great-great-grandson, d'Alembert received him rather poorly, and to get rid of him gave him a mathematics book.
Pierre-Simon Laplace
–
Pierre-Simon Laplace (1749–1827). Posthumous portrait by Jean-Baptiste Paulin Guérin, 1838.
Pierre-Simon Laplace
–
Laplace's house at Arcueil.
Pierre-Simon Laplace
–
Laplace.
Pierre-Simon Laplace
–
Tomb of Pierre-Simon Laplace
27.
Method of least squares
–
The method of least squares is a standard approach in regression analysis to the approximate solution of overdetermined systems, i.e. sets of equations in which there are more equations than unknowns. "Least squares" means that the overall solution minimizes the sum of the squares of the errors made in the results of every single equation. The most important application is in data fitting: the best fit in the least-squares sense minimizes the sum of squared residuals. Least squares problems fall into two categories, linear (or ordinary) least squares and non-linear least squares, depending on whether or not the residuals are linear in all unknowns. The linear least-squares problem occurs in statistical regression analysis; it has a closed-form solution. The non-linear problem is solved by iterative refinement; at each iteration the system is approximated by a linear one. Polynomial least squares describes the variance in a prediction of the dependent variable as a function of the independent variable. When the observations come from an exponential family and mild conditions are satisfied, least-squares estimates and maximum-likelihood estimates are identical. The method of least squares can also be derived as a method of moments estimator. The following discussion is mostly presented in terms of linear functions, but the use of least squares is valid and practical for more general families of functions; also, by iteratively applying local quadratic approximation to the likelihood, the least-squares method may be used to fit a generalized linear model. For the topic of approximating a function by a sum of others using an objective function based on squared distances, see least squares function approximation. The least-squares method is credited to Carl Friedrich Gauss. The accurate description of the behavior of celestial bodies was the key to enabling ships to sail in open seas. One early idea was the combination of different observations taken under the same conditions, as opposed to simply trying one's best to observe and record a single observation accurately; this approach was known as the method of averages.
The combination of different observations taken under different conditions came to be known as the method of least absolute deviation; it was notably performed by Roger Joseph Boscovich in his work on the shape of the earth in 1757. A further ingredient was the development of a criterion that can be evaluated to determine when the solution with the minimum error has been achieved. Laplace tried to specify a mathematical form of the probability density for the errors; he felt these to be the simplest assumptions he could make, and he had hoped to obtain the arithmetic mean as the best estimate. Instead, his estimator was the posterior median. The first clear and concise exposition of the method of least squares was published by Legendre in 1805. The technique is described as a procedure for fitting linear equations to data, and the value of Legendre's method of least squares was immediately recognized by leading astronomers. In 1809 Carl Friedrich Gauss published his method of calculating the orbits of celestial bodies. In that work he claimed to have been in possession of the method of least squares since 1795; this naturally led to a priority dispute with Legendre
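The linear least-squares problem described above has a closed-form solution. For the simplest case, fitting a straight line y = m·x + c to data points, the normal equations reduce to two formulas in the sums of the data; the following Python sketch (our own illustration, with hypothetical helper names) implements them directly:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = m*x + c via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # Slope and intercept minimizing the sum of squared vertical residuals.
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - m * sx) / n
    return m, c
```

Applied to points lying exactly on a line, the fit recovers the line's slope and intercept; applied to noisy data, it returns the line minimizing the sum of squared residuals, exactly the criterion Legendre and Gauss described.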
Method of least squares
–
Carl Friedrich Gauss
28.
Robert Adrain
–
Robert Adrain was an Irish mathematician whose career was spent in the USA. He was considered one of the most brilliant mathematical minds of the time in America, and he is chiefly remembered for his formulation of the method of least squares. He was born in Carrickfergus, County Antrim, Ireland, but left Ireland after being wounded in the uprising of the United Irishmen in 1798 and moved to Princeton. He taught mathematics at various schools in the United States and was president of the York County Academy in York, Pennsylvania, from 1801 to 1805. Adrain certainly did not know of the work of C. F. Gauss on least squares, although it is possible that he had read A. M. Legendre's exposition of the method. Adrain was an editor of and contributor to the Mathematical Correspondent, the first mathematical journal in the United States. He was elected a Fellow of the American Academy of Arts and Sciences. In 1825 he founded a somewhat more successful publication targeting a wider readership, The Mathematical Diary, which was published through 1832. Adrain was the father of Congressman Garnett B. Adrain. Robert Adrain died in New Brunswick, New Jersey. He is commemorated by a plaque, unveiled at Carrickfergus by the Ulster History Circle. Attribution: this article incorporates text from a publication now in the public domain. Adrain, Dublin, M. H. Gill & son. Research concerning the probabilities of the errors which happen in making observations, &c., Vol. I, Article XIV, pp. 93–109, Philadelphia, William P. Farrand and Co., 1808. Enseignements et éditions: de Robert Adrain à la genèse nationale d'une discipline, Université de Nantes, Centre François Viète. Mathematical statistics in the early States
Robert Adrain
–
Robert Adrain
29.
Carl Friedrich Gauss
–
Johann Carl Friedrich Gauss was born on 30 April 1777 in Brunswick, in the Duchy of Brunswick-Wolfenbüttel, as the son of poor working-class parents. Gauss later solved a puzzle about his birthdate in the context of finding the date of Easter. He was christened and confirmed in a church near the school he attended as a child. A contested story relates that, when he was eight, he figured out how to add up all the numbers from 1 to 100. There are many other anecdotes about his precocity while a toddler, and he made his first ground-breaking mathematical discoveries while still a teenager. He completed Disquisitiones Arithmeticae, his magnum opus, in 1798 at the age of 21. This work was fundamental in consolidating number theory as a discipline and has shaped the field to the present day. While at university, Gauss independently rediscovered several important theorems. Gauss was so pleased by his construction of the regular heptadecagon that he requested that one be inscribed on his tombstone; the stonemason declined, stating that the difficult construction would essentially look like a circle. The year 1796 was most productive for both Gauss and number theory: he discovered the construction of the heptadecagon on 30 March. He further advanced modular arithmetic, greatly simplifying manipulations in number theory, and on 8 April he became the first to prove the quadratic reciprocity law. This remarkably general law allows mathematicians to determine the solvability of any quadratic equation in modular arithmetic. The prime number theorem, conjectured on 31 May, gives a good understanding of how the prime numbers are distributed among the integers. On 10 July Gauss also discovered that every positive integer is representable as a sum of at most three triangular numbers, and then jotted down in his diary the note "ΕΥΡΗΚΑ! num = Δ + Δ + Δ".
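Gauss's "Eureka" theorem, that every non-negative integer is a sum of at most three triangular numbers (equivalently, exactly three if 0 counts as triangular), is easy to confirm by brute force for small n. The sketch below is our own illustration, not Gauss's proof; the function names are ours:

```python
import math

def is_triangular(t: int) -> bool:
    # t = k(k+1)/2 exactly when 8t + 1 is a perfect square
    s = math.isqrt(8 * t + 1)
    return s * s == 8 * t + 1

def three_triangular(n: int):
    """Find triangular numbers a <= b <= c with a + b + c == n (0 counts)."""
    tris = [k * (k + 1) // 2 for k in range(math.isqrt(2 * n + 1) + 2)]
    tris = [t for t in tris if t <= n]
    for i, a in enumerate(tris):
        for b in tris[i:]:
            c = n - a - b
            if c >= b and is_triangular(c):
                return a, b, c
    return None  # never reached, by Gauss's theorem
```

The perfect-square test works because t = k(k+1)/2 rearranges to 8t + 1 = (2k+1)².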
On 1 October he published a result on the number of solutions of polynomials with coefficients in finite fields. In 1831 Gauss developed a fruitful collaboration with the physics professor Wilhelm Weber, leading to new knowledge in magnetism and the discovery of Kirchhoff's circuit laws in electricity. It was during this time that he formulated his namesake law, and the two constructed the first electromechanical telegraph in 1833, which connected the observatory with the institute for physics in Göttingen. In 1840, Gauss published his influential Dioptrische Untersuchungen, in which he gave the first systematic analysis of the formation of images under a paraxial approximation. Among his results, Gauss showed that under a paraxial approximation an optical system can be characterized by its cardinal points, and he derived the Gaussian lens formula. In 1845, he became an associated member of the Royal Institute of the Netherlands. In 1854, Gauss selected the topic for Bernhard Riemann's Habilitationsvortrag, Über die Hypothesen, welche der Geometrie zu Grunde liegen. On the way home from Riemann's lecture, Weber reported that Gauss was full of praise. Gauss died in Göttingen on 23 February 1855 and is interred in the Albani Cemetery there. Two individuals gave eulogies at his funeral: Gauss's son-in-law Heinrich Ewald and Wolfgang Sartorius von Waltershausen. His brain was preserved and was studied by Rudolf Wagner, who found its mass to be 1,492 grams and the cerebral area equal to 219,588 square millimeters. Highly developed convolutions were also found, which in the early 20th century were suggested as the explanation of his genius. Gauss was a Lutheran Protestant, a member of the St. Albans Evangelical Lutheran church in Göttingen
Carl Friedrich Gauss
–
Carl Friedrich Gauß (1777–1855), painted by Christian Albrecht Jensen
Carl Friedrich Gauss
–
Statue of Gauss at his birthplace, Brunswick
Carl Friedrich Gauss
–
Title page of Gauss's Disquisitiones Arithmeticae
Carl Friedrich Gauss
–
Gauss's portrait published in Astronomische Nachrichten 1828
30.
W. F. Donkin
–
William Fishburn Donkin FRS FRAS was an astronomer and mathematician, Savilian Professor of Astronomy at the University of Oxford. He was born at Bishop Burton, Yorkshire, on 15 February 1814; his parents were Thomas Donkin and Alice née Bateman, and two of his uncles were Bryan Donkin and Thomas Bateman. He was educated at St Peter's School, York, and in 1832 entered St Edmund Hall, Oxford. He proceeded B.A. on 25 May 1836 and M.A. in 1839. He was elected a fellow of University College, and he continued for about six years at St Edmund Hall in the capacity of mathematical lecturer. In 1842, Donkin was elected Savilian Professor of Astronomy at Oxford, in succession to George Johnson; soon afterwards he was elected a Fellow of the Royal Society, and also of the Royal Astronomical Society. In 1844, he married the daughter of the Rev. John Hawtrey of Guernsey. Donkin's poor health compelled him to live abroad during the latter part of his life. There is a list of his papers, sixteen in number. Early works were an Essay on the Theory of the Combination of Observations for the Ashmolean Society, and articles on ancient Greek music for William Smith's Dictionary of Antiquities. In 1861, he read a paper to the Royal Astronomical Society on The Secular Acceleration of the Moon's Mean Motion. Donkin was also a contributor to the Philosophical Magazine; in June 1850 he explained the algebra of quaternions and spatial rotation. His last paper, a Note on Certain Statements in Elementary Works concerning the Specific Heat of Gases, appeared in 1864. In 1867 he began work on Acoustics; the first volume was put to press in 1870 by Bartholomew Price after Donkin died. The text studies vibrations, particularly transverse vibrations of a string and longitudinal vibrations of an elastic rod. Attribution: this article incorporates text from a publication now in the public domain: Harrison, W. J., "Donkin, William Fishburn"
W. F. Donkin
–
William F. Donkin (US Naval Observatory Library)
31.
Augustus De Morgan
–
Augustus De Morgan was a British mathematician and logician. He formulated De Morgan's laws and introduced the term mathematical induction, making its idea rigorous. Augustus De Morgan was born in Madurai, India in 1806. His father was Lieut.-Colonel John De Morgan, who held various appointments in the service of the East India Company; his mother, Elizabeth Dodson, descended from James Dodson, who computed a table of anti-logarithms, that is, the numbers corresponding to exact logarithms. Augustus De Morgan became blind in one eye a month or two after he was born. The family moved to England when Augustus was seven months old. When De Morgan was ten years old, his father died, and Mrs. De Morgan resided at various places in the southwest of England. His mathematical talents went unnoticed until he was fourteen, when a family friend discovered him making an elaborate drawing of a figure in Euclid with ruler and compasses. She explained the aim of Euclid to Augustus and gave him an initiation into demonstration. He received his secondary education from Mr. Parsons, a fellow of Oriel College, Oxford, who appreciated classics better than mathematics. His mother was an active and ardent member of the Church of England and desired that her son should become a clergyman; De Morgan himself was of a different temper, writing later, "I shall use the word Anti-Deism to signify the opinion that there does not exist a Creator who made and sustains the Universe." His college tutor was John Philips Higman, FRS; at college he played the flute for recreation and was prominent in the musical clubs. His love of knowledge for its own sake interfered with training for the great mathematical race; as a consequence he came out fourth wrangler. This entitled him to the degree of Bachelor of Arts, but to take the degree of Master of Arts, and thereby become eligible for a fellowship, it was then necessary to pass a theological test. To the signing of any such test De Morgan felt a strong objection. In about 1875 theological tests for academic degrees were abolished in the Universities of Oxford and Cambridge.
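De Morgan's laws state that negation exchanges conjunction and disjunction: not (p and q) equals (not p) or (not q), and not (p or q) equals (not p) and (not q). They can be verified exhaustively over truth values, and in their set-theoretic form over complements; this quick Python check is our own illustration (all names are ours):

```python
from itertools import product

# Propositional form: negation turns "and" into "or" and vice versa.
def laws_hold_for(p: bool, q: bool) -> bool:
    first = (not (p and q)) == ((not p) or (not q))
    second = (not (p or q)) == ((not p) and (not q))
    return first and second

all_cases = all(laws_hold_for(p, q) for p, q in product([False, True], repeat=2))

# Set-theoretic form: complement turns union into intersection and vice versa.
U = set(range(10))           # a small universe
A, B = {1, 2, 3}, {3, 4, 5}
set_form = (U - (A | B) == (U - A) & (U - B)) and \
           (U - (A & B) == (U - A) | (U - B))
```

Four truth-value cases suffice for the propositional form because each law involves only two variables.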
As no career was open to him at his own university, he decided to go to the Bar and took up residence in London. About this time the movement for founding London University took shape: a body of liberal-minded men resolved to meet the difficulty of theological tests by establishing in London a university on the principle of religious neutrality. De Morgan, then 22 years of age, was appointed professor of mathematics. His introductory lecture, On the study of mathematics, is a discourse upon mental education of permanent value. The London University was a new institution, and the relations of the Council of management, the Senate of professors and the body of students were not well defined. A dispute arose between the professor of anatomy and his students, and in consequence of the action taken by the Council, De Morgan resigned; another professor of mathematics was appointed, who then drowned a few years later. De Morgan had shown himself a prince of teachers, and he was invited to return to his chair. About this time the Society for the Diffusion of Useful Knowledge was founded; its object was to spread scientific and other knowledge by means of cheap and clearly written treatises by the best writers of the time, and one of its most voluminous and effective writers was De Morgan. When De Morgan came to reside in London he found a congenial friend in William Frend, notwithstanding his mathematical heresy about negative quantities
Augustus De Morgan
–
Augustus De Morgan (1806-1871)
Augustus De Morgan
–
Augustus De Morgan.
32.
Giovanni Schiaparelli
–
Giovanni Virginio Schiaparelli was an Italian astronomer and science historian. He was educated at the University of Turin and later studied at Berlin Observatory; in 1859–1860 he worked in Pulkovo Observatory near St Petersburg, and then worked for over forty years at Brera Observatory in Milan. Among Schiaparelli's contributions are his telescopic observations of Mars; in his initial observations, he named the seas and continents of Mars. He also observed dense networks of linear features that he called canali, a word famously rendered into English as "canals": while that term indicates an artificial construction, the term "channels" connotes that the observed features were natural configurations of the planetary surface. The canali were later shown to be an optical illusion, thanks in part to the observations of the Italian astronomer Vincenzo Cerulli. Of the canali Schiaparelli wrote: "I have already pointed out that, in the absence of rain on Mars…". He proved, for example, that the orbit of the Leonid meteor shower coincided with that of the comet Tempel-Tuttle; these observations led the astronomer to formulate the hypothesis, subsequently proved to be correct, that meteor showers are streams of debris left behind by comets. He was also a keen observer of the inner planets Mercury and Venus; he made several drawings and determined their rotation periods, although in 1965 it was shown that his and most other subsequent measurements of Mercury's period were incorrect. Schiaparelli was also a scholar of the history of classical astronomy. His honours include the Lalande Prize, the Gold Medal of the Royal Astronomical Society, and the Bruce Medal. Named after him are the main-belt asteroid 4062 Schiaparelli (named on 15 September 1989), the lunar crater Schiaparelli, the Martian crater Schiaparelli, Schiaparelli Dorsum on Mercury, and the 2016 ExoMars Schiaparelli lander. His niece, Elsa Schiaparelli, became a noted designer of haute couture. His works include Le stelle cadenti (1873), La vita sul pianeta Marte (1893), and Scritti sulla storia della astronomia antica (1925, in three volumes). Obituaries: G. V. Schiaparelli, by J. G. Galle, J. B. N. Hennessey, J. Coles, and J. E. Gore, The Observatory, Vol. 33, pp. 311–318, August 1910.
Giovanni Schiaparelli
–
Giovanni Schiaparelli
Giovanni Schiaparelli
–
Schiaparelli's grave at the Monumental Cemetery of Milan, Italy
33.
Karl Pearson
–
Karl Pearson FRS was an influential English mathematician and biostatistician. He has been credited with establishing the discipline of mathematical statistics, and he was also a protégé and biographer of Sir Francis Galton. Pearson was born in Islington, London, to William and Fanny. He travelled to Germany to study physics at the University of Heidelberg under G. H. Quincke and metaphysics under Kuno Fischer. He next visited the University of Berlin, where he attended the lectures of the famous physiologist Emil du Bois-Reymond on Darwinism. Pearson also studied Roman Law, taught by Bruns and Mommsen, medieval and 16th-century German literature, and socialism. He became an historian and Germanist and spent much of the 1880s in Berlin, Heidelberg, Vienna, and Saig bei Lenzkirch. He wrote on Passion plays, religion, Goethe, and Werther, as well as sex-related themes, and Pearson was offered a Germanics post at King's College, Cambridge. Comparing Cambridge students to those he knew from Germany, Karl found the German students unathletic; he wrote to his mother, "I used to think athletics and sport was overestimated at Cambridge, but now I think it cannot be too highly valued." "Have you ever attempted to conceive all there is in the world worth knowing—that not one subject in the universe is unworthy of study?" he also wrote. "Mankind seems on the verge of a new and glorious discovery. What Newton did to simplify the planetary motions must now be done to unite in one whole the various isolated theories of mathematical physics." Pearson then returned to London to study law, emulating his father. His next career move was to the Inner Temple, where he read law until 1881. After this, he returned to mathematics, deputising for the mathematics professor at King's College, London in 1881 and for the professor at University College, London in 1883.
In 1884, he was appointed to the Goldsmid Chair of Applied Mathematics and Mechanics at University College, and Pearson became the editor of Common Sense of the Exact Sciences when William Kingdon Clifford died. His collaboration with the zoologist Walter Frank Raphael Weldon, in biometry and evolutionary theory, was a fruitful one; Weldon introduced Pearson to Charles Darwin's cousin Francis Galton, who was interested in aspects of evolution such as heredity and eugenics. Pearson became Galton's protégé—his "statistical heir", as some have put it—at times to the verge of hero worship. In 1890 Pearson married Maria Sharpe; Maria died in 1928, and in 1929 Karl married Margaret Victoria Child. He and his family lived at 7 Well Road in Hampstead, now marked with a blue plaque. He predicted that Galton, rather than Charles Darwin, would be remembered as the most prodigious grandson of Erasmus Darwin. When Galton died, he left the residue of his estate to the University of London for a Chair in Eugenics. Pearson was the first holder of this chair, the Galton Chair of Eugenics, and he formed the Department of Applied Statistics, into which he incorporated the Biometric and Galton laboratories. He remained with the department until his retirement in 1933. Pearson was a zealous atheist and a freethinker. His book The Grammar of Science covered several themes that were later to become part of the theories of Einstein; in it Pearson asserted that the laws of nature are relative to the perceptive ability of the observer
Karl Pearson
–
Portrait of Karl Pearson, by Elliott & Fry, 1890.
Karl Pearson
–
Karl Pearson at work, 1910.
34.
George Boole
–
George Boole was an English mathematician, educator, philosopher and logician who worked in the fields of differential equations and algebraic logic. Boolean logic is credited with laying the foundations for the information age. Boole was born in Lincoln, Lincolnshire, England, the son of John Boole Sr. He had a primary school education and received lessons from his father, but had little further formal or academic teaching. William Brooke, a bookseller in Lincoln, may have helped him with Latin, and he was self-taught in modern languages. At age 16 Boole became the breadwinner for his parents and three siblings, taking up a junior teaching position in Doncaster at Heigham's School. Boole participated in the Mechanics' Institute, in the Greyfriars, Lincoln. Without a teacher, it took him many years to master calculus. At age 19, Boole successfully established his own school in Lincoln; four years later he took over Hall's Academy in Waddington, outside Lincoln, following the death of Robert Hall. In 1840 he moved back to Lincoln, where he ran a boarding school. Boole became a prominent local figure, an admirer of John Kaye, the bishop. He took part in the campaign for early closing, and with E. R. Larken and others he set up a society in 1847. He associated also with the Chartist Thomas Cooper, whose wife was a relation. From 1838 onwards Boole was making contacts with sympathetic British academic mathematicians and reading more widely. He studied algebra in the form of symbolic methods, as far as these were understood at the time. Boole's status as a mathematician was recognised by his appointment in 1849 as the first professor of mathematics at Queen's College, Cork in Ireland. He met his wife, Mary Everest, there in 1850 while she was visiting her uncle John Ryall, who was Professor of Greek. They married some years later, in 1855. He maintained his ties with Lincoln, working there with E. R. Larken in a campaign to reduce prostitution. 
Boole was awarded the Keith Medal by the Royal Society of Edinburgh in 1855 and was elected a Fellow of the Royal Society in 1857; he received honorary degrees of LL.D. from the University of Dublin and the University of Oxford. In late November 1864, Boole walked, in the rain, from his home at Lichfield Cottage in Ballintemple to the university. He soon became ill, developing a cold and high fever. As his wife believed that remedies should resemble their cause, she put her husband to bed and poured buckets of water over him, the wet having brought on his illness. Boole's condition worsened, and on 8 December 1864 he died of fever-induced pleural effusion.
George Boole
–
Boole in about 1860
George Boole
–
Boole's House and School at 3 Pottergate in Lincoln
George Boole
–
Plaque from the house in Lincoln
George Boole
–
The house at 5 Grenville Place in Cork, in which Boole lived between 1849 and 1855, and where he wrote The Laws of Thought
35.
Andrey Markov
–
Andrey Andreyevich Markov was a Russian mathematician. He is best known for his work on stochastic processes; a primary subject of his research later became known as Markov chains and Markov processes. Markov and his younger brother Vladimir Andreevich Markov proved the Markov brothers' inequality, and his son, another Andrei Andreevich Markov, was also a notable mathematician, making contributions to constructive mathematics and recursive function theory. Andrey Markov was born on 14 June 1856 in Russia. He attended Petersburg Grammar, where he was seen as a rebellious student by a select few teachers; in his academics he performed poorly in most subjects other than mathematics. Later in life he attended Petersburg University, where he was lectured by Pafnuty Chebyshev. Among his teachers were Yulian Sokhotski, Konstantin Posse, Yegor Zolotarev, Pafnuty Chebyshev, Aleksandr Korkin, Mikhail Okatov and Osip Somov. He completed his studies at the university and was later asked if he would like to stay and have a career as a mathematician. He later taught at schools and continued his own mathematical studies. In this time he found a use for his mathematical skills: he figured out that he could use chains to model the alliteration of vowels. He also contributed to many other areas of mathematics in his time, and he died at age 66 on 20 July 1922. After completing his studies, he passed the candidate's examinations and remained at the university to prepare for a lecturer's position. In April 1880, Markov defended his master's thesis, "About Binary Quadratic Forms with Positive Determinant"; five years later, in January 1885, there followed his doctoral thesis, "About Some Applications of Algebraic Continuous Fractions". His pedagogical work began after the defense of his master's thesis in autumn 1880; as a privatdozent he lectured on differential and integral calculus. 
Later he lectured alternately on introduction to analysis and probability theory, and from 1895 through 1905 he also lectured in differential calculus. One year after the defense of his thesis, Markov was appointed extraordinary professor. In 1890, after the death of Viktor Bunyakovsky, Markov became a member of the academy. His promotion to a professor of St. Petersburg University followed in the fall of 1894. In 1896, Markov was elected a member of the academy as the successor of Chebyshev. In 1905, he was appointed merited professor and was granted the right to retire; until 1910, however, he continued to lecture in the calculus of differences.
Andrey Markov
–
Andrey (Andrei) Andreyevich Markov
Andrey Markov
–
Markov in 1886
Andrey Markov
–
Markov's headstone
36.
Markov chains
–
In probability theory and related fields, a Markov process, named after the Russian mathematician Andrey Markov, is a stochastic process that satisfies the Markov property: conditional on the present state of the system, its future and past states are independent. A Markov chain is a type of Markov process that has either a discrete state space or a discrete index set, but the precise definition of a Markov chain varies. Andrey Markov studied Markov processes in the early 20th century, publishing his first paper on the topic in 1906; random walks on the integers and the gambler's ruin problem are examples of Markov processes and were studied hundreds of years earlier. Two important examples are the Wiener process and the Poisson process; these two processes are Markov processes in continuous time, while random walks on the integers and the gambler's ruin problem are examples of Markov processes in discrete time. The algorithm known as PageRank, which was proposed for the internet search engine Google, is based on a Markov process. The adjective Markovian is used to describe something that is related to a Markov process. A Markov chain is a stochastic process with the Markov property. The term "Markov chain" refers to the sequence of random variables such a process moves through; it can thus be used for describing systems that follow a chain of linked events. The system's state space and time parameter index need to be specified. In addition, there are extensions of Markov processes that are referred to as such. Moreover, the time index need not necessarily be real-valued; as with the state space, there are conceivable processes that move through index sets with other mathematical constructs. Notice that the general state space continuous-time Markov chain is general to such a degree that it has no designated term. While the time parameter is usually discrete, the state space of a Markov chain does not have any generally agreed-on restrictions. However, many applications of Markov chains employ finite or countably infinite state spaces. Besides time-index and state-space parameters, there are many other variations, extensions and generalizations. 
For simplicity, most of this article concentrates on the discrete-time, discrete state-space case. The changes of state of the system are called transitions, and the probabilities associated with state changes are called transition probabilities. The process is characterized by a state space and a transition matrix describing the probabilities of particular transitions. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state, and the process does not terminate.
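The transition-matrix description above can be sketched in a few lines of Python. The two-state chain and its probabilities below are illustrative assumptions, not taken from the text; repeatedly multiplying a distribution by the transition matrix shows the chain converging to its stationary distribution:

```python
# Hypothetical two-state chain (say, 0 = "sunny", 1 = "rainy").
# P[i][j] is the probability of a transition from state i to state j;
# each row sums to 1, so there is always a next state.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(dist, P):
    """One transition: multiply a distribution row vector by P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0]        # start in state 0 with certainty
for _ in range(50):
    dist = step(dist, P)

# After many steps the distribution approaches the stationary
# distribution pi satisfying pi = pi P, here (5/6, 1/6).
print(dist)
```

For this matrix the convergence is geometric, at a rate set by the second eigenvalue of P (here 0.4), so 50 iterations are far more than enough.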
Markov chains
–
Russian mathematician Andrey Markov.
Markov chains
–
A simple two-state Markov chain
37.
Stochastic process
–
In probability theory and related fields, a stochastic or random process is a mathematical object usually defined as a collection of random variables. Stochastic processes are used as mathematical models of systems and phenomena that appear to vary in a random manner. Furthermore, seemingly random changes in financial markets have motivated the use of stochastic processes in finance. Applications and the study of phenomena have in turn inspired the proposal of new stochastic processes. Examples of such processes include the Wiener process or Brownian motion process, used by Louis Bachelier to study price changes on the Paris Bourse, and the Poisson process, used by A. K. Erlang to study the number of phone calls occurring in a period of time. The term random function is also used to refer to a stochastic or random process. The terms stochastic process and random process are used interchangeably, often with no specific mathematical space specified for the set that indexes the random variables. But these two terms are often used when the random variables are indexed by the integers or an interval of the real line; if the random variables are indexed by the Cartesian plane or some higher-dimensional Euclidean space, the collection is instead called a random field. The values of a stochastic process are not always numbers and can be vectors or other mathematical objects. The theory of stochastic processes is considered to be an important contribution to mathematics. The set used to index the random variables is called the index set; historically, the index set was some subset of the real line, such as the natural numbers, giving the index set the interpretation of time. Each random variable in the collection takes values from the same space, known as the state space. This state space can be, for example, the integers. An increment is the amount that a stochastic process changes between two index values, often interpreted as two points in time. A stochastic process can have many outcomes, due to its randomness, and a single outcome of a stochastic process is called, among other names, a sample function or realization. 
A stochastic process can be classified in different ways, for example, by its state space or its index set. One common way of classification is by the cardinality of the index set: if the index set is some interval of the real line, then time is said to be continuous. The two types of stochastic processes are respectively referred to as discrete-time and continuous-time stochastic processes.
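A minimal sketch of a discrete-time stochastic process is the simple symmetric random walk on the integers mentioned above; the step count and seed below are arbitrary illustrative choices:

```python
import random

def random_walk(n_steps, seed=None):
    """A discrete-time stochastic process on the integers: the index set
    is {0, 1, ..., n_steps} (interpreted as time), the state space is the
    integers, and each increment is an independent +1 or -1 step."""
    rng = random.Random(seed)
    position, path = 0, [0]
    for _ in range(n_steps):
        position += rng.choice([-1, 1])
        path.append(position)
    return path

# One outcome of the process (a sample function, or realization):
walk = random_walk(1000, seed=42)
print(walk[:10])
```

Rescaling such walks (many small steps over a fixed interval) yields the Wiener process mentioned above as a continuous-time limit.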
Stochastic process
–
Stock market fluctuations have been modeled by stochastic processes.
38.
History of statistics
–
The history of statistics can be said to start around 1749, although, over time, there have been changes to the interpretation of the word statistics. In early times, the meaning was restricted to information about states; this was later extended to include all collections of information of all types, and later still it was extended to include the analysis and interpretation of such data. In modern terms, statistics means both sets of collected information, as in national accounts and temperature records, and analytical work which requires statistical inference. Statistical activities are associated with models expressed using probabilities, and require probability theory for them to be put on a firm theoretical basis. A number of statistical concepts have had an important impact on a wide range of sciences. By the 18th century, the term designated the systematic collection of demographic and economic data by states. For at least two millennia, these data were mainly tabulations of human and material resources that might be taxed or put to military use. In the early 19th century, collection intensified, and the meaning of statistics broadened to include the discipline concerned with the collection, summary, and analysis of data. Today, data are collected and statistics are computed and widely distributed in government, business, most of the sciences, and sports; electronic computers have expedited more elaborate statistical computation even as they have facilitated the collection and aggregation of data. A single data analyst may have available a set of data files with millions of records, collected over time from computer activity or from computerized sensors, point-of-sale registers, and so on. The term mathematical statistics designates the mathematical theories of probability and statistical inference; the relation between statistics and probability theory developed rather late, however. 
In the 19th century, statistics increasingly used probability theory, whose initial results were found in the 17th and 18th centuries. By 1800, astronomy used probability models and statistical theories, particularly the method of least squares; much of the theoretical work was readily available by the time computers were available to exploit it. By the 1970s, Johnson and Kotz had produced a four-volume Compendium on Statistical Distributions. Applied statistics can be regarded not as a field of mathematics but as an autonomous mathematical science, like computer science and operations research. Unlike mathematics, statistics had its origins in public administration. Applications arose early in demography and economics; large areas of micro- and macro-economics today are statistics with an emphasis on time-series analyses. With its emphasis on learning from data and making best predictions, statistics also has been shaped by areas of research including psychological testing and medicine. The ideas of statistical testing have considerable overlap with decision science. With its concerns with searching and effectively presenting data, statistics has overlap with information science.
History of statistics
–
Sir William Petty, a 17th-century economist who used early statistical methods to analyse demographic data.
History of statistics
–
Carl Friedrich Gauss, mathematician who developed the method of least squares in 1809.
History of statistics
–
Karl Pearson, the founder of mathematical statistics.
39.
Kolmogorov
–
Andrey Kolmogorov was born in Tambov, about 500 kilometers south-southeast of Moscow, in 1903. His mother, Maria Yakovlevna Kolmogorova, died giving birth to him, and Andrey was raised by two of his aunts in Tunoshna at the estate of his grandfather, a well-to-do nobleman. Little is known about Andrey's father; he was supposedly named Nikolai Matveevich Kataev and had been an agronomist. Nikolai had been exiled from St. Petersburg to the Yaroslavl province after his participation in the movement against the czars. He disappeared in 1919 and was presumed to have been killed in the Russian Civil War. Andrey Kolmogorov was educated in his aunt Vera's village school, and his earliest literary efforts appeared in the school journal; Andrey was the editor of its mathematical section. In 1910, his aunt adopted him, and they moved to Moscow. Later, Kolmogorov began to study at Moscow State University and, at the same time, at the Mendeleev Moscow Institute of Chemistry and Technology. Kolmogorov writes about this time: "I arrived at Moscow University with a knowledge of mathematics. I knew in particular the beginning of set theory. I studied many questions in articles in the Encyclopedia of Brockhaus and Efron, filling out for myself what was presented too concisely in these articles." Kolmogorov gained a reputation for his wide-ranging erudition. During the same period, Kolmogorov worked out and proved several results in set theory and in the theory of Fourier series. In 1922, Kolmogorov gained international recognition for constructing a Fourier series that diverges almost everywhere; around this time, he decided to devote his life to mathematics. In 1925, Kolmogorov graduated from Moscow State University and began to study under the supervision of Nikolai Luzin. Kolmogorov became interested in probability theory. 
In 1929, Kolmogorov earned his Doctor of Philosophy degree from Moscow State University. In 1930, Kolmogorov went on his first long trip abroad, traveling to Göttingen and Munich, and then to Paris; he had various contacts in Göttingen. His pioneering work, About the Analytical Methods of Probability Theory, was published in 1931; also in 1931, he became a professor at Moscow State University. In 1935, Kolmogorov became the first chairman of the department of probability theory at Moscow State University. Around the same years Kolmogorov contributed to the field of ecology and generalized the Lotka–Volterra model of predator-prey systems. In 1936, Kolmogorov and Alexandrov were involved in the persecution of their common teacher Nikolai Luzin, in the so-called Luzin affair. In a 1938 paper, Kolmogorov established the basic theorems for smoothing and predicting stationary stochastic processes, a paper that had military applications during the Cold War.
Kolmogorov
–
Andrey Kolmogorov
Kolmogorov
–
Kolmogorov (left) delivers a talk at a Soviet information theory symposium. (Tallinn, 1973).
Kolmogorov
–
Kolmogorov works on his talk (Tallinn, 1973).
40.
Risk
–
Risk is the potential of gaining or losing something of value. Values can be gained or lost when taking risk resulting from an action or inaction. Risk can also be defined as the interaction with uncertainty, where uncertainty is a potential, unpredictable, and uncontrollable outcome; risk is a consequence of action taken in spite of uncertainty. Risk perception is the subjective judgment people make about the severity and probability of a risk, and may vary from person to person. Any human endeavor carries some risk, but some are much riskier than others. The Oxford English Dictionary cites the earliest use of the word in English as of 1621, and the spelling as "risk" from 1655. It defines risk as "the possibility of loss, injury, or other adverse or unwelcome circumstance". Risk is also defined as an uncertain event or condition that, if it occurs, has an effect on at least one objective, or as the probability of something happening multiplied by the resulting cost or benefit if it does. In finance, risk is the possibility that an actual return on an investment will be lower than the expected return. In insurance, risk is a situation where the probability of a variable is known; a risk is not an uncertainty, a peril, or a hazard. In securities trading, risk is the probability of a loss or drop in value; non-systematic risk is any risk that is not market-related, also called non-market risk, extra-market risk or diversifiable risk. In the workplace, risk is the product of the consequence and probability of a hazardous event or phenomenon. For example, the risk of developing cancer is estimated as the probability of developing cancer over a lifetime as a result of exposure to potential carcinogens. The International Organization for Standardization publication ISO 31000 / ISO Guide 73:2002 definition of risk is "the effect of uncertainty on objectives"; in this definition, uncertainties include events and uncertainties caused by ambiguity or a lack of information. 
It also includes both negative and positive impacts on objectives. Very different approaches to risk management are taken in different fields; for example, risk may be viewed as the unwanted subset of a set of uncertain outcomes, or as relating to the probability of uncertain future events. According to the factor analysis of information risk (FAIR) approach, risk is the probable frequency and probable magnitude of future loss; in computer science this definition is used by The Open Group. OHSAS defines risk as the combination of the probability of a hazard resulting in an adverse event and the severity of the event. In information security, risk is defined as the potential that a threat will exploit vulnerabilities of an asset or group of assets. Financial risk is defined as the unpredictable variability or volatility of returns.
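The "probability multiplied by consequence" definition quoted above is easy to make concrete. The scenarios and figures below are invented for illustration, not drawn from any standard or real risk register:

```python
# Risk as expected loss: sum over scenarios of probability * consequence.
# All scenario names and numbers here are hypothetical.
scenarios = [
    # (description,             annual probability, loss if it occurs)
    ("minor equipment failure", 0.30,               10_000),
    ("major fire",              0.01,              500_000),
    ("data breach",             0.05,              200_000),
]

def expected_loss(scenarios):
    """Expected annual loss: probability times consequence, summed."""
    return sum(p * loss for _, p, loss in scenarios)

print(expected_loss(scenarios))  # 3000 + 5000 + 10000 = 18000
```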
Risk
–
Firefighters at work
41.
Market (economics)
–
A market is one of the many varieties of systems, institutions, procedures, social relations and infrastructures whereby parties engage in exchange. While parties may exchange goods and services by barter, most markets rely on sellers offering their goods or services in exchange for money from buyers; it can be said that a market is the process by which the prices of goods and services are established. Markets facilitate trade and enable the distribution and allocation of resources in a society, and they allow any trade-able item to be evaluated and priced. A market emerges more or less spontaneously or may be constructed deliberately by human interaction in order to enable the exchange of rights of services. Markets can also be worldwide, for example the global diamond trade, and national economies can be classified, for example, as developed markets or developing markets. In mainstream economics, the concept of a market is any structure that allows buyers and sellers to exchange any type of goods, services and information. The exchange of goods or services, with or without money, is a transaction. A major topic of debate is how much a given market can be considered to be a "free market", that is, free from government intervention. However, it is not always clear how the allocation of resources can be improved, since there is always the possibility of government failure. 
A market sometimes emerges more or less spontaneously but is often constructed deliberately by human interaction in order to enable the exchange of rights of services. Markets of varying types can spontaneously arise whenever a party has an interest in a good or service that another party can provide. Hence there can be a market for cigarettes in correctional facilities, another for chewing gum in a playground, and yet another for contracts for the future delivery of a commodity. Markets vary in form, scale, location, and types of participants, as well as the types of goods and services traded; some markets apply market dynamics to facilitate information aggregation. However, market prices may be distorted by a seller or sellers with monopoly power; such price distortions can have an adverse effect on market participants' welfare and reduce the efficiency of market outcomes. Also, the level of organization and negotiating power of buyers and sellers markedly affects the functioning of the market. Markets are a system, and systems have structure; the structure of a well-functioning market is defined by the theory of perfect competition. Market failures are often associated with time-inconsistent preferences, information asymmetries, non-perfectly competitive markets, principal–agent problems, and externalities. Among the major negative externalities which can occur as a side effect of production and market exchange are air pollution and environmental degradation. There exists a popular thought, especially among economists, that markets would have a structure of perfect competition.
Market (economics)
–
Financial markets
Market (economics)
–
Corn Exchange, in London circa 1809.
Market (economics)
–
A market in Râmnicu Vâlcea by Amedeo Preziosi.
Market (economics)
–
Cabbage market by Václav Malý.
42.
Actuarial science
–
Actuarial science is the discipline that applies mathematical and statistical methods to assess risk in insurance, finance and other industries and professions. Actuaries are professionals who are qualified in this field through intense education; in many countries, actuaries must demonstrate their competence by passing a series of thorough professional examinations. Actuarial science includes a number of interrelated subjects, including mathematics, probability theory, statistics, finance and economics. Historically, actuarial science used deterministic models in the construction of tables and premiums. The science has gone through changes during the last 30 years due to the proliferation of high-speed computers. Many universities have undergraduate and graduate programs in actuarial science. In 2010, a study published by the job search website CareerCast ranked actuary as the #1 job in the United States; the study used five key criteria to rank jobs: environment, income, employment outlook, physical demands, and stress. A similar study by U.S. News & World Report in 2006 included actuaries among the 25 Best Professions that it expects will be in demand in the future. Actuarial science became a mathematical discipline in the late 17th century with the increased demand for long-term insurance coverage such as burial and life insurance. These long-term coverages required that money be set aside to pay future benefits, such as annuity and death benefits many years into the future; this led to the development of an important actuarial concept, referred to as the present value of a future sum. Certain aspects of the actuarial methods for discounting pension funds have come under criticism from modern financial economics. Contemporary life insurance programs have extended to include credit and mortgage insurance, key man insurance for small businesses, and long-term care insurance. In the health field, actuarial science studies the effects of consumer choice and the distribution of the utilization of medical services and procedures. 
These factors underlay the development of the Resource-Based Relative Value Scale at Harvard in a multi-disciplined study. Actuarial science also aids in the design of benefit structures, reimbursement standards, and the analysis of the effects of proposed government standards on the cost of healthcare. It is common with mergers and acquisitions that several pension plans have to be combined or at least administered on an equitable basis; benefit plan liabilities have to be properly valued, reflecting both earned benefits for past service and the benefits for future service. Actuarial science is also applied to property, casualty and liability insurance. In these forms of insurance, coverage is provided on a renewable period, and coverage can be cancelled at the end of the period by either party. Property and casualty insurance companies tend to specialize because of the complexity and diversity of risks. One division is to organize around personal and commercial lines of insurance; personal lines of insurance are for individuals and include fire, auto, homeowners, theft and umbrella coverages.
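The present value of a future sum mentioned above can be sketched directly; the interest rate, horizon, and benefit amount below are illustrative assumptions, not actuarial figures from the text:

```python
def present_value(future_sum, rate, years):
    """Amount to set aside today so that, compounding at a fixed annual
    interest rate, it grows to future_sum after the given number of years."""
    return future_sum / (1 + rate) ** years

# Reserve needed today for a 100,000 benefit payable in 20 years,
# assuming a 5% annual rate:
reserve = present_value(100_000, 0.05, 20)
print(round(reserve, 2))  # about 37688.95
```

Real actuarial reserving also weights each payment by the probability it is actually owed (for example, survival probabilities from a mortality table such as the one pictured below), but the discounting step is the core of the concept.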
Actuarial science
–
2003 US mortality (life) table, Table 1, Page 1
43.
Environmental regulation
–
The core environmental law regimes address environmental pollution; a related set of regimes focuses on the conservation of natural resources. Other areas, such as environmental impact assessment, may not fit neatly into either category. Early examples of legal enactments designed to consciously preserve the environment, for its own sake or for human enjoyment, are found throughout history. In the common law, the primary protection was found in the law of nuisance, which addressed, for example, smells emanating from pigsties and imposed strict liability against dumping rubbish. Private enforcement, however, was limited and found to be woefully inadequate to deal with major environmental threats, particularly threats to common resources. During the Great Stink of 1858, the dumping of sewage into the River Thames began to smell so ghastly in the heat that Parliament had to be evacuated; in 19 days, Parliament passed a further Act to build the London sewerage system. London also suffered from terrible air pollution, and this culminated in the Great Smog of 1952, which in turn triggered its own legislative response, the Clean Air Act 1956. The basic regulatory structure was to set limits on emissions for households. Notwithstanding early analogues, the concept of environmental law as a separate and distinct body of law is a twentieth-century development. Air quality laws govern the emission of air pollutants into the atmosphere; a specialized subset of air quality laws regulates the quality of air inside buildings. Air quality laws are designed specifically to protect human health by limiting or eliminating airborne pollutant concentrations. Regulatory efforts include identifying and categorizing air pollutants and setting limits on acceptable emissions levels. Water quality laws govern the release of pollutants into water resources, including surface water, ground water, and stored drinking water. 
Some water quality laws, such as drinking water regulations, may be designed solely with reference to human health. Regulatory areas include sewage treatment and disposal, industrial and agricultural waste water management, and control of surface runoff from construction sites and urban environments. Waste management laws govern the transport, treatment, storage, and disposal of all manner of waste, including solid waste and hazardous waste. Regulatory efforts include identifying and categorizing waste types and mandating transport, treatment, storage, and disposal practices. Environmental cleanup laws govern the removal of pollution or contaminants from environmental media such as soil, sediment, surface water, or ground water. Chemical safety laws govern the use of chemicals in human activities; as contrasted with media-oriented environmental laws, chemical control laws seek to manage the pollutants themselves. Regulatory efforts include banning specific chemical constituents in consumer products. Environmental impact assessment is the assessment of the environmental consequences of a plan, policy, program, or concrete project prior to the decision to move forward with the proposed action. Environmental assessments may be governed by rules of administrative procedure regarding public participation and documentation of decision making. Water resources laws govern the ownership and use of water resources, including surface water and ground water; regulatory areas may include water conservation, use restrictions, and ownership regimes. Mineral resource laws cover several basic topics, including the ownership of the mineral resource and who can work them. Mining is also affected by regulations regarding the health and safety of miners.
Environmental regulation
–
Industrial air pollution now regulated by air quality law.
Environmental regulation
–
A typical stormwater outfall, subject to water quality law.
Environmental regulation
–
A municipal landfill, operated pursuant to waste management law.
44.
Financial regulation
–
This may be handled by either a government or non-government organization. Financial regulation has also influenced the structure of banking sectors by decreasing borrowing costs. Its aims include the reduction of financial crime, reducing the extent to which it is possible for a regulated business to be used for a purpose connected with financial crime, and regulating foreign participation in the financial markets. Acts empower organizations, government or non-government, to monitor activities and enforce actions. There are various setups and combinations in place for the regulatory structure around the globe. Exchange acts ensure that trading on the exchanges is conducted in a proper manner, most prominently the pricing process, execution and settlement of trades, and direct and efficient trade monitoring. Financial regulators ensure that companies and market participants comply with various regulations under the trading acts. The trading acts demand that listed companies publish regular financial reports, whereas market participants are required to publish major shareholder notifications. Asset management supervision or investment acts ensure the frictionless operation of those vehicles. Banking acts lay down rules for banks which they have to observe when they are being established and when they are carrying on their business. These rules are designed to prevent unwelcome developments that might disrupt the functioning of the banking system, thus ensuring a strong and efficient banking system. The following is a short listing of regulatory authorities in various jurisdictions; for a more complete listing, please see the list of financial regulatory authorities by country. Sometimes more than one institution regulates and supervises the banking market, normally because, apart from regulatory authorities, central banks also regulate the banking industry. The Eurozone countries are forming a Single Supervisory Mechanism under the European Central Bank as a prelude to banking union. 
There are also associations of financial regulatory authorities.
Financial regulation
45.
Groupthink
–
Groupthink requires individuals to avoid raising controversial issues or alternative solutions, and there is a loss of individual creativity, uniqueness and independent thinking. The dysfunctional group dynamics of the ingroup produce an illusion of invulnerability; thus the ingroup significantly overrates its own abilities in decision-making and significantly underrates the abilities of its opponents. Furthermore, groupthink can produce dehumanizing actions against the outgroup. Antecedent factors such as group cohesiveness, faulty group structure, and situational context play into the likelihood of whether or not groupthink will impact the decision-making process. Most of the research on groupthink was conducted by Irving Janis, who published a book on the subject in 1972 and revised it in 1982. Janis used the Bay of Pigs disaster and the Japanese attack on Pearl Harbor in 1941 as his two prime case studies; later studies have evaluated and reformulated his groupthink model. William H. Whyte, Jr. coined the term in 1952 in Fortune magazine: "Groupthink being a coinage – and, admittedly, a loaded one – a working definition is in order. We are not talking about mere instinctive conformity – it is, after all, a perennial failing of mankind. What we are talking about is a rationalized conformity – an open, articulate philosophy which holds that group values are not only expedient but right." Irving Janis pioneered the initial research on the groupthink theory. He does not cite Whyte, but coined the term by analogy with "doublethink": "Groupthink is a term of the same order as the words in the newspeak vocabulary George Orwell used in his dismaying world of 1984. In that context, groupthink takes on an invidious connotation. Exactly such a connotation is intended, since the term refers to a deterioration in mental efficiency, reality testing and moral judgments as a result of group pressures." 
Janis set the foundation for the study of groupthink with his research in the American Soldier Project, where he studied the effect of stress on group cohesiveness. After this study he remained interested in the ways in which people make decisions under external threats, and he concluded that in each of the cases he examined, the decisions occurred largely because of groupthink, which prevented contradictory views from being expressed and subsequently evaluated. These events included Nazi Germany's decision to invade the Soviet Union in 1941. Despite the popularity of the concept of groupthink, fewer than two dozen studies addressed the phenomenon itself following the publication of Victims of Groupthink, between the years 1972 and 1998. This is seen in the phenomenon of groupthink alleged to have occurred, notoriously, under compulsion: groupthink at least implies voluntarism. When this fails, the organization is not above outright intimidation; in one case, refusal by new hires to cheer on command incurred consequences not unlike the indoctrination and brainwashing techniques associated with a Soviet-era gulag. To make groupthink testable, Irving Janis devised eight symptoms indicative of groupthink. Type I: Overestimations of the group – its power and morality. These include illusions of invulnerability creating excessive optimism and encouraging risk taking, and unquestioned belief in the morality of the group, causing members to ignore the consequences of their actions. Type II: Closed-mindedness. This includes rationalizing warnings that might challenge the group's assumptions, and stereotyping those who are opposed to the group as weak, evil, biased, spiteful, or impotent. Type III: Pressures toward uniformity. This includes self-censorship of ideas that deviate from the apparent group consensus.
Groupthink
–
From "Groupthink" by William H. Whyte, Jr. in Fortune magazine, March 1952
46.
Reliability (statistics)
–
Reliability in statistics and psychometrics is the overall consistency of a measure. A measure is said to have high reliability if it produces similar results under consistent conditions; it is the characteristic of a set of test scores that relates to the amount of random error from the measurement process that might be embedded in the scores. Scores that are highly reliable are accurate, reproducible, and consistent from one testing occasion to another – that is, if the testing process were repeated with a group of test takers, essentially the same results would be obtained. Various kinds of reliability coefficients, with values ranging between 0.00 (much error) and 1.00 (no error), are used to indicate the amount of error in the scores. For example, measurements of height and weight are often extremely reliable. There are several classes of reliability estimates. Inter-rater reliability assesses the degree of agreement between two or more raters in their appraisals. Test-retest reliability assesses the degree to which test scores are consistent from one test administration to the next; measurements are gathered from a single rater who uses the same methods or instruments and the same testing conditions. Inter-method reliability assesses the degree to which test scores are consistent when there is a variation in the methods or instruments used; this allows inter-rater reliability to be ruled out. When dealing with forms, it may be termed parallel-forms reliability. Internal consistency reliability assesses the consistency of results across items within a test. Reliability does not imply validity: a measure that is measuring something consistently is not necessarily measuring what you want to be measuring. For example, while there are many reliable tests of specific abilities, not all of them would be valid for predicting, say, job performance. While reliability does not imply validity, reliability does place a limit on the overall validity of a test. 
A test that is not perfectly reliable cannot be perfectly valid. While a reliable test may provide useful valid information, a test that is not reliable cannot possibly be valid. For example, if a set of weighing scales consistently measured the weight of an object as 500 grams over the true weight, then the scale would be very reliable, but it would not be valid. For the scale to be valid, it should return the true weight of an object. This example demonstrates that a reliable measure is not necessarily valid. In practice, testing measures are never perfectly consistent; theories of test reliability have been developed to estimate the effects of inconsistency on the accuracy of measurement.
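As a concrete illustration of test-retest reliability, the consistency of two administrations of the same test can be estimated with a Pearson correlation coefficient. A minimal sketch in Python; the scores and variable names are hypothetical, chosen only to make the calculation explicit:

```python
import math

# Hypothetical scores for five test takers on two administrations of the same test
test1 = [10, 12, 14, 16, 18]
test2 = [11, 13, 13, 17, 19]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(test1, test2)
print(round(r, 3))  # a value near 1.00 indicates high test-retest reliability
```

For these made-up scores r is about 0.96, which would be read as high (though not perfect) reliability.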
Reliability (statistics)
–
Validity & Reliability
47.
Natural language processing
–
The history of NLP generally starts in the 1950s, although work can be found from earlier periods. In 1950, Alan Turing published an article titled "Computing Machinery and Intelligence", which proposed what is now called the Turing test as a criterion of intelligence. The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian sentences into English. The authors claimed that within three or five years, machine translation would be a solved problem. Little further research in machine translation was conducted until the late 1980s. A notable system of the 1960s was ELIZA, a simulation of a Rogerian psychotherapist written by Joseph Weizenbaum. Using almost no information about human thought or emotion, ELIZA sometimes provided a startlingly human-like interaction; when the "patient" exceeded the very small knowledge base, ELIZA might provide a generic response, for example responding to "My head hurts" with "Why do you say your head hurts?". During the 1970s many programmers began to write conceptual ontologies, which structured real-world information into computer-understandable data; examples are MARGIE, SAM, PAM, TaleSpin, QUALM, Politics, and Plot Units. During this time, many chatterbots were written, including PARRY and Racter. Up to the 1980s, most NLP systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in NLP with the introduction of machine learning algorithms for language processing. Some of the earliest-used machine learning algorithms, such as decision trees, produced systems of hard if-then rules similar to existing hand-written rules. The cache language models upon which many speech recognition systems now rely are examples of statistical models. Many of the early successes occurred in the field of machine translation, due especially to work at IBM Research. However, most other systems depended on corpora specifically developed for the tasks implemented by these systems; as a result, a great deal of research has gone into methods of more effectively learning from limited amounts of data. Recent research has focused on unsupervised and semi-supervised learning algorithms. 
Such algorithms are able to learn from data that has not been hand-annotated with the desired answers. Generally, this task is much more difficult than supervised learning, and typically produces less accurate results for a given amount of input data. However, there is an enormous amount of non-annotated data available. Since the so-called statistical revolution in the late 1980s and mid-1990s, much NLP research has relied heavily on machine learning. Formerly, many language-processing tasks typically involved the direct hand coding of rules, which is not in general robust to natural-language variation. The machine-learning paradigm calls instead for using statistical inference to automatically learn such rules through the analysis of large corpora of typical real-world examples. Many different classes of machine learning algorithms have been applied to NLP tasks. These algorithms take as input a set of features that are generated from the input data. Some of the algorithms, such as decision trees, produced systems of hard if-then rules similar to the systems of hand-written rules that were then common.
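The statistical paradigm described above can be made concrete with a toy example. The sketch below is a deliberately minimal naive Bayes unigram classifier on a hypothetical labeled corpus (not any specific system mentioned in the text): it learns word statistics from examples instead of relying on hand-written rules.

```python
import math
from collections import Counter

# Hypothetical labeled corpus: the model learns from examples, not hand-written rules
corpus = [
    ("the movie was great and fun", "pos"),
    ("a great fun film", "pos"),
    ("the movie was dull and bad", "neg"),
    ("a bad boring film", "neg"),
]

counts = {"pos": Counter(), "neg": Counter()}
for text, label in corpus:
    counts[label].update(text.split())
vocab = set(w for c in counts.values() for w in c)

def score(text, label):
    """Log-probability of the text under a unigram model with add-one smoothing."""
    total = sum(counts[label].values())
    return sum(
        math.log((counts[label][w] + 1) / (total + len(vocab)))
        for w in text.split()
    )

def classify(text):
    return max(("pos", "neg"), key=lambda lab: score(text, lab))

print(classify("a great film"))  # -> pos
print(classify("a dull movie"))  # -> neg
```

Real systems use far richer features and models, but the principle – statistical inference from a corpus rather than hand-coded rules – is the same.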
Natural language processing
–
An automated online assistant providing customer service on a web page, an example of an application where natural language processing is a major component.
48.
Function (mathematics)
–
In mathematics, a function is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output. An example is the function that relates each real number x to its square x2. The output of a function f corresponding to an input x is denoted by f(x). In this example, if the input is −3, then the output is 9; likewise, if the input is 3, then the output is also 9, and we may write f(3) = 9. The input variable is sometimes referred to as the argument of the function. Functions of various kinds are the central objects of investigation in most fields of modern mathematics. There are many ways to describe or represent a function. Some functions may be defined by a formula or algorithm that tells how to compute the output for a given input. Others are given by a picture, called the graph of the function. In science, functions are sometimes defined by a table that gives the outputs for selected inputs. A function could also be described implicitly, for example as the inverse to another function or as a solution of a differential equation. Sometimes the codomain is called the function's range, but more commonly the word "range" is used to mean, instead, specifically the set of outputs. For example, we could define a function using the rule f(x) = x2 by saying that the domain and codomain are the real numbers. The image of this function is the set of non-negative real numbers. In analogy with arithmetic, it is possible to define addition, subtraction, and multiplication of functions. Another important operation defined on functions is function composition, where the output from one function becomes the input to another function. Linking each shape to its color is a function from X to Y: each shape is linked to a color, there is no shape that lacks a color, and no shape has more than one color. This function will be referred to as the color-of-the-shape function. The input to a function is called the argument and the output is called the value. 
The set of all permitted inputs to a function is called the domain of the function. Thus, the domain of the color-of-the-shape function is the set of the four shapes. The concept of a function does not require that every possible output is the value of some argument. A second example of a function is the following: the domain is chosen to be the set of natural numbers, and the codomain is the set of integers. The function associates to any number n the number 4 − n. For example, to 1 it associates 3 and to 10 it associates −6. A third example of a function has the set of polygons as domain and the set of natural numbers as codomain.
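The squaring function and function composition described above can be sketched in Python. This is only a programming-language analogy (mathematical functions are defined on sets, not by procedures), and the helper names f, g and compose are illustrative choices:

```python
def f(x):
    """The squaring function: relates each input x to x squared."""
    return x * x

def g(x):
    """A second function, used only to illustrate composition."""
    return x + 1

def compose(outer, inner):
    """Function composition: the output of `inner` becomes the input of `outer`."""
    return lambda x: outer(inner(x))

# Two different inputs may share one output, but each input has exactly one output.
print(f(-3), f(3))   # 9 9
h = compose(g, f)    # h(x) = g(f(x)) = x**2 + 1
print(h(3))          # 10
```

Note that f(−3) = f(3) = 9 is allowed: a function must assign exactly one output to each input, but distinct inputs may share an output.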
Function (mathematics)
–
A function f takes an input x, and returns a single output f (x). One metaphor describes the function as a "machine" or " black box " that for each input returns a corresponding output.
49.
Independence (probability theory)
–
In probability theory, two events are independent, statistically independent, or stochastically independent if the occurrence of one does not affect the probability of occurrence of the other. Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other. Two events A and B are independent if their joint probability equals the product of their probabilities, P(A ∩ B) = P(A)P(B). Although the derived expressions in terms of conditional probabilities may seem more intuitive, they are not the preferred definition, as the conditional probabilities may be undefined if P(A) or P(B) is 0. Furthermore, the preferred definition makes clear by symmetry that when A is independent of B, B is also independent of A. A finite set of events is pairwise independent if every pair of events is independent. A finite set of events is mutually independent if every event is independent of any intersection of the other events – that is, if and only if for every n-element subset the probability of the intersection equals the product of the probabilities. This is called the multiplication rule for independent events. Note that it is not a condition involving only the product of all the probabilities of all single events. For more than two events, a mutually independent set of events is pairwise independent, but the converse is not necessarily true. Two random variables X and Y are independent if and only if the elements of the π-system generated by them are independent; that is to say, for every a and b, the events {X ≤ a} and {Y ≤ b} are independent events. A set of random variables is pairwise independent if and only if every pair of random variables is independent. A set of random variables is mutually independent if and only if for any finite subset X1, …, Xn and any finite sequence of numbers a1, …, an, the events {X1 ≤ a1}, …, {Xn ≤ an} are mutually independent events. The measure-theoretically inclined may prefer to substitute events {X ∈ A} for events {X ≤ a} in the above definition, and that definition is exactly equivalent to the one above when the values of the random variables are real numbers. 
It has the advantage of working also for complex-valued random variables or for random variables taking values in any measurable space. Intuitively, two random variables X and Y are conditionally independent given Z if, once Z is known, the value of Y adds no further information about X. For instance, two measurements X and Y of the same underlying quantity Z are not independent, but they are conditionally independent given Z. The formal definition of conditional independence is based on the idea of conditional distributions. If X, Y, and Z are discrete random variables, then X and Y are conditionally independent given Z if P(X = x | Y = y, Z = z) = P(X = x | Z = z) for any x, y and z with P(Z = z) > 0. That is, the conditional distribution for X given Y and Z is the same as that given Z alone.
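The gap between pairwise and mutual independence can be checked by direct enumeration. A minimal sketch using two fair coin tosses, a standard textbook example (the event names A, B, C are choices made here, not taken from the text): A = first toss is heads, B = second toss is heads, C = both tosses agree.

```python
from fractions import Fraction
from itertools import product

outcomes = list(product("HT", repeat=2))  # 4 equally likely outcomes

def prob(event):
    """Probability of an event (a set of outcomes) under the uniform measure."""
    return Fraction(sum(1 for o in outcomes if o in event), len(outcomes))

A = {o for o in outcomes if o[0] == "H"}   # first toss heads
B = {o for o in outcomes if o[1] == "H"}   # second toss heads
C = {o for o in outcomes if o[0] == o[1]}  # both tosses agree

# Every pair multiplies: the events are pairwise independent
assert prob(A & B) == prob(A) * prob(B)
assert prob(A & C) == prob(A) * prob(C)
assert prob(B & C) == prob(B) * prob(C)

# The triple intersection does not: the events are not mutually independent
print(prob(A & B & C), prob(A) * prob(B) * prob(C))  # 1/4 versus 1/8
```

Here P(A ∩ B ∩ C) = 1/4 while P(A)P(B)P(C) = 1/8, so the three events are pairwise but not mutually independent.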
Independence (probability theory)
–
Pairwise independent, but not mutually independent, events.
50.
Continuous random variable
–
For instance, if the random variable X is used to denote the outcome of a coin toss, then the probability distribution of X would take the value 0.5 for X = heads, and 0.5 for X = tails. In more technical terms, the probability distribution is a description of a random phenomenon in terms of the probabilities of events. Examples of random phenomena can include the results of an experiment or survey. A probability distribution is defined in terms of an underlying sample space, which is the set of all possible outcomes of the random phenomenon being observed. The sample space may be the set of real numbers or a higher-dimensional vector space, or it may be a list of non-numerical values; for example, the sample space of a coin flip would be {heads, tails}. Probability distributions are divided into two classes, discrete and continuous. A discrete probability distribution can be encoded by a discrete list of the probabilities of the outcomes; on the other hand, a continuous probability distribution is typically described by probability density functions. The normal distribution represents a commonly encountered continuous probability distribution. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures. A probability distribution whose sample space is the set of real numbers is called univariate. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution. The multivariate normal distribution is a commonly encountered multivariate distribution. To define probability distributions for the simplest cases, one needs to distinguish between discrete and continuous random variables. A continuous random variable takes any particular value with probability zero; for example, the probability that an object weighs exactly 500 g is zero. Continuous probability distributions can be described in several ways; the cumulative distribution function is the antiderivative of the probability density function, provided that the latter function exists. 
As probability theory is used in diverse applications, terminology is not uniform. The following terms are used for probability distribution functions. Probability distribution: a table that displays the probabilities of outcomes in a sample; it could be called a frequency distribution table, where all occurrences of outcomes sum to 1. Distribution function: a form of frequency distribution table. Probability distribution function: a form of probability distribution table.
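The discrete case can be made concrete by enumerating the sample space. The sketch below computes the probability mass function for the sum S of two fair dice by exact enumeration (exact fractions are used to avoid rounding):

```python
from fractions import Fraction
from itertools import product
from collections import Counter

# Enumerate the 36 equally likely outcomes of rolling two fair dice
sums = Counter(a + b for a, b in product(range(1, 7), repeat=2))
pmf = {s: Fraction(n, 36) for s, n in sums.items()}

print(pmf[11])                                  # 1/18
print(sum(p for s, p in pmf.items() if s > 9))  # 1/6
assert sum(pmf.values()) == 1  # probabilities over the sample space sum to 1
```

For example, p(11) = 2/36 = 1/18 (from the outcomes (5,6) and (6,5)), and P(S > 9) = p(10) + p(11) + p(12) = 1/6.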
Continuous random variable
–
The probability mass function (pmf) p (S) specifies the probability distribution for the sum S of counts from two dice. For example, the figure shows that p (11) = 1/18. The pmf allows the computation of probabilities of events such as P (S > 9) = 1/12 + 1/18 + 1/36 = 1/6, and all other probabilities in the distribution.
51.
Inverse probability
–
In probability theory, inverse probability is an obsolete term for the probability distribution of an unobserved variable. The development of the field and terminology from "inverse probability" to "Bayesian probability" is described by Fienberg. The term "inverse probability" appears in an 1837 paper of De Morgan, in reference to Laplace's method of probability, though the term does not occur in these earlier works. Later Jeffreys uses the term in his defense of the methods of Bayes and Laplace. The term "Bayesian", which displaced "inverse probability", was introduced by Ronald Fisher around 1950. Inverse probability, variously interpreted, was the dominant approach to statistics until the development of frequentism in the early 20th century by Ronald Fisher, Jerzy Neyman and Egon Pearson. Following the development of frequentism, the terms frequentist and Bayesian developed to contrast these approaches. The distribution of the observed data given the unobserved variable is called the direct probability. The inverse probability problem was the problem of estimating a parameter from experimental data in the sciences, especially astronomy. A simple example would be the problem of estimating the position of a star in the sky for purposes of navigation; given the data, one must estimate the true position. This problem would now be considered one of inferential statistics. The terms "direct probability" and "inverse probability" were in use until the middle part of the 20th century, when the terms "likelihood function" and "posterior distribution" became prevalent.
Inverse probability
–
Ronald Fisher
52.
Chaos theory
–
Chaos theory is a branch of mathematics focused on the behavior of dynamical systems that are highly sensitive to initial conditions. This happens even though these systems are deterministic, meaning that their behavior is fully determined by their initial conditions. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as: "Chaos: When the present determines the future, but the approximate present does not approximately determine the future." Chaotic behavior exists in many natural systems, such as weather and climate. It also occurs spontaneously in some systems with artificial components, such as road traffic. This behavior can be studied through analysis of a chaotic mathematical model, or through analytical techniques such as recurrence plots. Chaos theory has applications in several disciplines, including meteorology, sociology, physics, environmental science, computer science, engineering, economics, biology, and ecology. The theory formed the basis for such fields of study as complex dynamical systems, edge of chaos theory, and self-assembly processes. Chaos theory concerns deterministic systems whose behavior can in principle be predicted. Chaotic systems are predictable for a while and then appear to become random; the amount of time for which the behavior of a chaotic system can be effectively predicted is characterized by a time scale called the Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days. In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast. This means, in practice, that a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random. In common usage, "chaos" means a state of disorder. However, in chaos theory, the term is defined more precisely. 
Although no universally accepted mathematical definition of chaos exists, a commonly used definition originally formulated by Robert L. Devaney says that to classify a dynamical system as chaotic, it must be sensitive to initial conditions, it must be topologically transitive, and it must have dense periodic orbits. In some cases the last two properties imply the first; in these cases, while it is often the most practically significant property, sensitivity to initial conditions need not be stated in the definition. If attention is restricted to intervals, the second property implies the other two. An alternative, and in general weaker, definition of chaos uses only the first two properties in the above list. Sensitivity to initial conditions means that each point in a chaotic system is arbitrarily closely approximated by other points with significantly different future paths. Thus, a small change, or perturbation, of the current trajectory may lead to significantly different future behavior. The popular notion of the "butterfly effect" stems from a talk Edward Lorenz gave in Washington, D.C., entitled "Predictability: Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas?". The flapping wing represents a small change in the initial condition of the system, which causes a chain of events leading to large-scale phenomena.
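Sensitivity to initial conditions can be demonstrated numerically. A minimal sketch using the logistic map x(n+1) = r·x(n)·(1 − x(n)) with r = 4, a standard chaotic example; the starting value and perturbation size are arbitrary choices for illustration:

```python
def logistic(x, r=4.0):
    """One step of the logistic map, which is chaotic for r = 4."""
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10  # two nearly identical initial conditions
diffs = []
for _ in range(60):
    x, y = logistic(x), logistic(y)
    diffs.append(abs(x - y))

# The tiny initial perturbation is amplified by many orders of magnitude
print(diffs[0], max(diffs))
```

The two trajectories start 10^-10 apart; after a few dozen iterations the separation has grown to order one, so long-range prediction from imperfectly known initial conditions fails.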
Chaos theory
–
The Lorenz attractor displays chaotic behavior. These two plots demonstrate sensitive dependence on initial conditions within the region of phase space occupied by the attractor.
Chaos theory
–
A plot of Lorenz attractor for values r = 28, σ = 10, b = 8/3
Chaos theory
–
Turbulence in the tip vortex from an airplane wing. Studies of the critical point beyond which a system creates turbulence were important for chaos theory, analyzed for example by the Soviet physicist Lev Landau, who developed the Landau-Hopf theory of turbulence. David Ruelle and Floris Takens later predicted, against Landau, that fluid turbulence could develop through a strange attractor, a main concept of chaos theory.
Chaos theory
–
A conus textile shell, similar in appearance to Rule 30, a cellular automaton with chaotic behaviour.
53.
Wave function
–
A wave function in quantum physics is a description of the quantum state of a system. The wave function is a complex-valued probability amplitude, and the probabilities for the possible results of measurements made on the system can be derived from it. The most common symbols for a wave function are the Greek letters ψ or Ψ (lower-case and capital psi, respectively). The wave function is a function of the degrees of freedom corresponding to some maximal set of commuting observables. Once such a representation is chosen, the wave function can be derived from the quantum state. For a given system, the choice of which commuting degrees of freedom to use is not unique. Some particles, like electrons and photons, have nonzero spin, and the wave function for such particles includes spin as an intrinsic, discrete degree of freedom. Other discrete variables can also be included, such as isospin; these values are often displayed in a column matrix. According to the superposition principle of quantum mechanics, wave functions can be added together and multiplied by complex numbers to form new wave functions. The Schrödinger equation determines how wave functions evolve over time. A wave function behaves qualitatively like other waves, such as water waves or waves on a string, because the Schrödinger equation is mathematically a type of wave equation. This explains the name "wave function", and gives rise to wave–particle duality. However, the wave function in quantum mechanics describes a kind of physical phenomenon, still open to different interpretations, which fundamentally differs from that of classical mechanical waves. The squared modulus of the wave function is interpreted as a probability density, and the integral of this quantity, over all the system's degrees of freedom, must be equal to 1. This general requirement a wave function must satisfy is called the normalization condition. Since the wave function is complex valued, only its relative phase and relative magnitude can be measured. 
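For a single spinless particle in one dimension, the normalization condition mentioned above takes a familiar form (a standard statement, written here for a position-space wave function Ψ(x, t)):

```latex
\int_{-\infty}^{\infty} |\Psi(x,t)|^2 \, dx = 1
```

so that |Ψ(x, t)|² can be read as a probability density for finding the particle at position x at time t.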
In 1905 Einstein postulated the proportionality between the frequency f of a photon and its energy E, E = hf, and in 1916 the corresponding relation between a photon's momentum p and wavelength λ, λ = h/p, where h is Planck's constant. These equations represent wave–particle duality for both massless and massive particles. In the 1920s and 1930s, quantum mechanics was developed using calculus and linear algebra. Those who used the techniques of calculus included Louis de Broglie, Erwin Schrödinger, and others, developing wave mechanics; those who applied the methods of linear algebra included Werner Heisenberg, Max Born, and others, developing matrix mechanics. Schrödinger subsequently showed that the two approaches were equivalent. However, no one was clear on how to interpret the wave function. At first, Schrödinger and others thought that wave functions represent particles that are spread out, with most of the particle being where the wave function is large. This was shown to be incompatible with the scattering of a wave packet representing a particle off a target: while a scattered particle may scatter in any direction, it does not break up.
Wave function
–
The electron probability density for the first few hydrogen atom electron orbitals shown as cross-sections. These orbitals form an orthonormal basis for the wave function of the electron. Different orbitals are depicted with different scale.
54.
Albert Einstein
–
Albert Einstein was a German-born theoretical physicist. He developed the theory of relativity, one of the two pillars of modern physics. Einstein's work is also known for its influence on the philosophy of science. Einstein is best known in popular culture for his mass–energy equivalence formula E = mc2. Near the beginning of his career, Einstein thought that Newtonian mechanics was no longer enough to reconcile the laws of classical mechanics with the laws of the electromagnetic field. This led him to develop his special theory of relativity during his time at the Swiss Patent Office in Bern. Shortly before, he had acquired Swiss citizenship in 1901, which he kept for his whole life. He continued to deal with problems of statistical mechanics and quantum theory, which led to his explanations of particle theory and the motion of molecules. He also investigated the thermal properties of light, which laid the foundation of the photon theory of light. In 1917, Einstein applied the general theory of relativity to model the large-scale structure of the universe. He was visiting the United States when Adolf Hitler came to power in 1933 and, being Jewish, did not go back to Germany; he settled in the United States, becoming an American citizen in 1940. On the eve of World War II, he endorsed a letter to President Franklin D. Roosevelt alerting him to the potential German development of an extremely powerful new type of bomb; this eventually led to what would become the Manhattan Project. Einstein supported defending the Allied forces, but generally denounced the idea of using the newly discovered nuclear fission as a weapon. Later, with the British philosopher Bertrand Russell, Einstein signed the Russell–Einstein Manifesto, which highlighted the danger of nuclear weapons. Einstein was affiliated with the Institute for Advanced Study in Princeton, New Jersey, until his death in 1955. Einstein published more than 300 scientific papers along with over 150 non-scientific works. On 5 December 2014, universities and archives announced the release of Einstein's papers, comprising more than 30,000 unique documents. 
Einstein's intellectual achievements and originality have made the word "Einstein" synonymous with "genius". Albert Einstein was born in Ulm, in the Kingdom of Württemberg in the German Empire, on 14 March 1879. His father was Hermann Einstein, a salesman and engineer. The Einsteins were non-observant Ashkenazi Jews, and Albert attended a Catholic elementary school in Munich from the age of 5 for three years. At the age of 8, he was transferred to the Luitpold Gymnasium. His father's company failed to win an important contract, and the loss forced the sale of the Munich factory. In search of business, the Einstein family moved to Italy, first to Milan. When the family moved to Pavia, Einstein stayed in Munich to finish his studies at the Luitpold Gymnasium. His father intended for him to pursue electrical engineering, but Einstein clashed with the authorities and resented the school's regimen. He later wrote that the spirit of learning and creative thought was lost in strict rote learning. At the end of December 1894, he travelled to Italy to join his family in Pavia, convincing the school to let him go by using a doctor's note. During his time in Italy he wrote an essay with the title "On the Investigation of the State of the Ether in a Magnetic Field".
Albert Einstein
–
Albert Einstein in 1921
Albert Einstein
–
Einstein at the age of 3 in 1882
Albert Einstein
–
Albert Einstein in 1893 (age 14)
Albert Einstein
–
Einstein's matriculation certificate at the age of 17, showing his final grades from the Argovian cantonal school (Aargauische Kantonsschule, on a scale of 1–6, with 6 being the highest possible mark)
55.
Statistical
–
Statistics is a branch of mathematics dealing with the collection, analysis, interpretation, presentation, and organization of data. In applying statistics to, e.g., a scientific, industrial, or social problem, it is conventional to begin with a statistical population or process to be studied. Populations can be diverse topics such as "all people living in a country" or "every atom composing a crystal". Statistics deals with all aspects of data, including the planning of data collection in terms of the design of surveys and experiments. The statistician Sir Arthur Lyon Bowley defines statistics as "numerical statements of facts in any department of inquiry placed in relation to each other". When census data cannot be collected, statisticians collect data by developing specific experiment designs. Representative sampling assures that inferences and conclusions can safely extend from the sample to the population as a whole. In contrast, an observational study does not involve experimental manipulation. Inferences in mathematical statistics are made under the framework of probability theory, which deals with the analysis of random phenomena. A standard statistical procedure involves the test of the relationship between two data sets, or a data set and synthetic data drawn from an idealized model. A hypothesis is proposed for the statistical relationship between the two data sets, and this is compared as an alternative to an idealized null hypothesis of no relationship between the two data sets. Rejecting or disproving the null hypothesis is done using statistical tests that quantify the sense in which the null can be proven false. Working from a null hypothesis, two basic forms of error are recognized: Type I errors (the null hypothesis is falsely rejected) and Type II errors (the null hypothesis is falsely retained). Multiple problems have come to be associated with this framework, ranging from obtaining a sufficient sample size to specifying an adequate null hypothesis. Measurement processes that generate statistical data are also subject to error. 
Many of these errors are classified as random or systematic; the presence of missing data or censoring may result in biased estimates, and specific techniques have been developed to address these problems. Statistics continues to be an area of active research, for example on the problem of how to analyze Big data. Statistics is a body of science that pertains to the collection, analysis, interpretation or explanation, and presentation of data. Some consider statistics to be a mathematical science rather than a branch of mathematics. While many scientific investigations make use of data, statistics is concerned with the use of data in the context of uncertainty. Mathematical techniques used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure-theoretic probability theory. In applying statistics to a problem, it is common practice to start with a population or process to be studied. Populations can be diverse topics such as all people living in a country or every atom composing a crystal. Ideally, statisticians compile data about the entire population; this may be organized by governmental statistical institutes.
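As a concrete sketch of such a test of "no relationship", the following Python uses a two-sided permutation test on two small made-up samples; the data, the function name, and the choice of a permutation test are illustrative assumptions of mine, not a procedure prescribed by the text.

```python
import random
import statistics

def permutation_test(a, b, n_resamples=2_000, seed=0):
    """Two-sided permutation test for a difference in means.

    Under the null hypothesis of no relationship between group label and
    value, shuffling the labels should produce differences as large as
    the observed one reasonably often; the p-value estimates how often.
    """
    rng = random.Random(seed)
    observed = statistics.mean(a) - statistics.mean(b)
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):])
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_resamples

# hypothetical measurements for a treated and a control group
treated = [12.1, 11.8, 13.0, 12.6, 12.9]
control = [10.2, 10.9, 10.4, 10.1, 10.8]
p = permutation_test(treated, control)
# a p-value below the chosen significance level (e.g. 0.05) rejects the
# null; wrongly rejecting a true null is a Type I error, while failing
# to reject a false null is a Type II error
```

With these clearly separated samples the estimated p-value is very small, so the null hypothesis of no difference would be rejected at conventional significance levels.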
Statistics
–
Scatter plots are used in descriptive statistics to show the observed relationships between different variables.
Statistics
–
More probability density is found as one gets closer to the expected (mean) value in a normal distribution. Statistics used in standardized testing assessment are shown. The scales include standard deviations, cumulative percentages, percentile equivalents, Z-scores, T-scores, standard nines, and percentages in standard nines.
Statistics
–
Gerolamo Cardano, the earliest pioneer of the mathematics of probability.
Statistics
–
Karl Pearson, a founder of mathematical statistics.
56.
Quantum decoherence
–
Quantum decoherence is the loss of quantum coherence. In quantum mechanics, particles such as electrons behave like waves and are described by a wavefunction; these waves can interfere, leading to the peculiar behaviour of quantum particles. As long as there exists a definite phase relation between different states, the system is said to be coherent. This coherence is a fundamental property of quantum mechanics, and is necessary for the functioning of quantum computers. However, when a system is not perfectly isolated, but in contact with its surroundings, the coherence decays with time, and as a result of this process the quantum behaviour is lost. Decoherence was first introduced in 1970 by the German physicist H. Dieter Zeh and has been a subject of active research since the 1980s. Decoherence can be viewed as the loss of information from a system into the environment; viewed in isolation, the system's dynamics are non-unitary, and thus the dynamics of the system alone are irreversible. As with any coupling, entanglements are generated between the system and environment, and these have the effect of sharing quantum information with, or transferring it to, the surroundings. Decoherence has been used to understand the collapse of the wavefunction in quantum mechanics. Decoherence does not generate actual wave function collapse; it only provides an explanation for the observation of wave function collapse, as the quantum nature of the system leaks into the environment. That is, components of the wavefunction are decoupled from a coherent system; a total superposition of the global or universal wavefunction still exists, but its ultimate fate remains an interpretational issue. Specifically, decoherence does not attempt to explain the measurement problem; rather, it provides an explanation for the transition of the system to a mixture of states that seem to correspond to those states observers perceive. Decoherence represents a challenge for the practical realization of quantum computers.
Simply put, quantum computers require that coherent states be preserved and that decoherence be managed. To examine how decoherence operates, an intuitive model is presented. The model requires some familiarity with quantum theory basics; analogies are made between visualisable classical phase spaces and Hilbert spaces. A more rigorous derivation in Dirac notation shows how decoherence destroys interference effects; next, the density matrix approach is presented for perspective. An N-particle system can be represented in non-relativistic quantum mechanics by a wavefunction ψ; this has analogies with the classical phase space.
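To make the damping of coherences concrete, here is a minimal numerical sketch of my own (a toy pure-dephasing model, not the derivation the text refers to): a single qubit starts in an equal superposition, and contact with an environment suppresses the off-diagonal terms of its density matrix while leaving the populations untouched. The dephasing time T2 and the exponential decay law are assumptions of this toy model.

```python
import numpy as np

# (|0> + |1>)/sqrt(2): a pure, fully coherent superposition
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho0 = np.outer(psi, psi.conj())        # its density matrix

def dephase(rho, t, T2=1.0):
    """Damp the off-diagonal ("coherence") terms by exp(-t/T2).

    The diagonal populations are left unchanged, which is the
    signature of pure dephasing in this toy model.
    """
    out = rho.copy()
    decay = np.exp(-t / T2)
    out[0, 1] *= decay
    out[1, 0] *= decay
    return out

rho_late = dephase(rho0, t=10.0)        # long after the dephasing time
# the populations are still 0.5 and 0.5, but the coherences are
# effectively gone: the state is indistinguishable from a classical
# 50/50 mixture, i.e. the interference has been lost
```

The surviving diagonal with vanished off-diagonals is exactly the "transition to a mixture of states" described above.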
57.
Heuristics in judgment and decision-making
–
In psychology, heuristics are simple, efficient rules which people often use to form judgments and make decisions. They are mental shortcuts that usually involve focusing on one aspect of a complex problem and ignoring others. These rules work well under most circumstances, but they can lead to systematic deviations from logic, probability or rational choice theory. The resulting errors are called cognitive biases, and many different types have been documented. These have been shown to affect people's choices in situations like valuing a house, deciding the outcome of a legal case, or making an investment decision. Heuristics usually govern automatic, intuitive judgments but can also be used as deliberate mental strategies when working from limited information. Cognitive scientist Herbert A. Simon originally proposed that human judgments are limited by available information, time constraints, and cognitive limitations, calling this bounded rationality. In the early 1970s, psychologists Amos Tversky and Daniel Kahneman demonstrated three heuristics that underlie a wide range of intuitive judgments. These findings set in motion the heuristics and biases research program, which studies how people make real-world judgments. This research challenged the idea that human beings are rational actors, but provided a theory of information processing to explain how people make estimates or choices. This heuristics-and-biases tradition has been criticised by Gerd Gigerenzer and others for being too focused on how heuristics lead to errors; the critics argue that heuristics can be seen as rational in an underlying sense. According to this perspective, heuristics are good enough for most purposes without being too demanding on the brain's resources. Another theoretical perspective sees heuristics as fully rational in that they are rapid, can be used without full information, and can be as accurate as more complicated procedures.
By understanding the role of heuristics in human psychology, marketers and other persuaders can influence decisions. In their initial research, Tversky and Kahneman proposed three heuristics: availability, representativeness, and anchoring and adjustment. Subsequent work has identified many more. Heuristics that underlie judgment are called judgment heuristics; another type is used to judge the desirability of possible choices. In psychology, availability is the ease with which a particular idea can be brought to mind. People often estimate how likely or how frequent an event is on the basis of its availability. When an infrequent event can be brought easily and vividly to mind, people tend to overestimate its likelihood. For example, people overestimate their likelihood of dying in a dramatic event such as a tornado or terrorism. Dramatic, violent deaths are more highly publicised and therefore have a higher availability. On the other hand, common but mundane events are hard to bring to mind; these include deaths from suicides, strokes, and diabetes. This heuristic is one of the reasons why people are more easily swayed by a single, vivid story than by a large body of statistical evidence. It may also play a role in the appeal of lotteries.
Heuristics in judgment and decision-making
–
The amount of money people will pay in an auction for a bottle of wine can be influenced by considering an arbitrary two-digit number.
Heuristics in judgment and decision-making
–
A visual example of attribute substitution. This illusion works because the 2D size of parts of the scene is judged on the basis of 3D (perspective) size, which is rapidly calculated by the visual system.
58.
Webster's Dictionary
–
The term Webster's has become a generic trademark in the U.S. for dictionaries of the English language, and for this reason the term may refer to any dictionary at all that chooses to use the name. Webster's is also often used to refer to a generic dictionary. Noah Webster, the author of the readers and spelling books that dominated the American market at the time, spent decades of research in compiling his dictionaries. His first dictionary, A Compendious Dictionary of the English Language, appeared in 1806. Webster was a proponent of English spelling reform for reasons both philological and nationalistic. In A Companion to the American Revolution, John Algeo notes that Webster was very influential in popularizing certain spellings in America, but he did not originate them; rather, he chose already existing options such as center, color and check on such grounds as simplicity. In William Shakespeare's first folios, for example, spellings such as center and color are the most common. He spent the next two decades working to expand his dictionary. In 1828, at the age of 70, Noah Webster published his American Dictionary of the English Language in two volumes containing 70,000 entries, as against the 58,000 of any previous dictionary. There were 2,500 copies printed, at $20 for the two volumes; at first the set sold poorly. When he lowered the price to $15, its sales improved. Not all copies were bound at the same time; the book also appeared in publishers' boards, and other original bindings of a later date are not unknown. In 1841, the 82-year-old Noah Webster published a revised edition of his lexicographical masterpiece with the help of his son. Its title page does not claim the status of second edition; B. L. Hamlen of New Haven, Connecticut, prepared the 1841 printing of the second edition. When Webster died, his heirs sold unbound sheets of his 1841 revision of the American Dictionary of the English Language to the firm of J. S. & C. Adams of Amherst, Massachusetts.
This firm bound and published a number of copies in 1844 – the same edition that Emily Dickinson used as a tool for her poetic composition. However, a $15 price tag on the book made it too expensive to sell easily, so Merriam acquired the rights from Adams, as well as signing a contract with Webster's heirs for sole rights. The third printing of the edition was by George and Charles Merriam of Springfield, Massachusetts; this was the first Webster's Dictionary with a Merriam imprint. Lepore demonstrates Webster's innovative ideas about language and politics and shows why Webster's endeavours were at first so poorly received. Culturally conservative Federalists denounced the work as radical, finding it too inclusive in its lexicon; meanwhile, Webster's old foes, the Jeffersonian Republicans, attacked the man, labelling him mad for such an undertaking.
Webster's Dictionary
–
Noah Webster
Webster's Dictionary
–
Extract from the Orthography section of the first edition, which popularized the American standard spellings of -er (6); -or (7); dropped -e (8); -or (10); -se (11); doubling consonants with suffix (15)
Webster's Dictionary
–
President Theodore Roosevelt was criticized for supporting the simplified spelling campaign of Andrew Carnegie in 1906
Webster's Dictionary
–
Merriam-Webster’s eleventh edition of the Collegiate Dictionary
59.
Sample space
–
In probability theory, the sample space of an experiment or random trial is the set of all possible outcomes or results of that experiment. A sample space is usually denoted using set notation, and the possible outcomes are listed as elements in the set. It is common to refer to a sample space by the labels S or Ω. For example, if the experiment is tossing a coin, the sample space is typically the set {heads, tails}. For tossing two coins, the sample space would be {(heads, heads), (heads, tails), (tails, heads), (tails, tails)}. For tossing a single six-sided die, the sample space is {1, 2, 3, 4, 5, 6}. A well-defined sample space is one of three elements in a probabilistic model; the other two are a well-defined set of possible events and a probability assigned to each event. For many experiments, there may be more than one plausible sample space available. For example, when drawing a card from a standard deck of fifty-two playing cards, one possibility for the sample space could be the various ranks, while another could be the suits. Still other sample spaces are possible, such as right side up or upside down, if some cards have been flipped when shuffling. Some treatments of probability assume that the various outcomes of an experiment are always defined so as to be equally likely; the result of this is that every possible combination of individuals who could be chosen for a sample is also equally likely. In an elementary approach to probability, any subset of the sample space is usually called an event. However, this gives rise to problems when the sample space is infinite. Under a more careful definition, only measurable subsets of the sample space, constituting a σ-algebra over the sample space itself, are considered events. See also: Probability space, Space, Set, Event, σ-algebra.
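The coin, two-coin and die examples above can be written down directly as Python sets; this small illustration (variable names are my own) also shows an event as a subset of the sample space.

```python
from itertools import product

coin = {"heads", "tails"}                  # tossing one coin
two_coins = set(product(coin, repeat=2))   # ordered pairs: 4 outcomes
die = {1, 2, 3, 4, 5, 6}                   # one roll of a six-sided die

# an event is a subset of the sample space, e.g. "roll an odd number"
odd = {n for n in die if n % 2 == 1}
assert odd <= die                          # events are subsets

# with equally likely outcomes, P(event) = |event| / |sample space|
p_odd = len(odd) / len(die)                # 3/6 = 0.5
```

The final line uses the equally-likely assumption discussed above; for the brass tack in the caption below, that assumption would fail.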
Sample space
–
Flipping a coin leads to a sample space composed of two outcomes that are almost equally likely.
Sample space
–
Up or down? Flipping a brass tack leads to a sample space composed of two outcomes that are not equally likely.
60.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN configuration of recognition was generated in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966; the 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines. The ISBN configuration of recognition was generated in 1967 in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974. The ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340 01381 8: 340 indicating the publisher, 01381 their serial number, and 8 the check digit. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Number EAN-13s.
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces. Separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture. In the United Kingdom, the United States, and some other countries, the service is provided by non-government-funded organisations. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker.
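The check-digit arithmetic described above is simple enough to sketch. These Python functions (the names are mine, not from any standard library) compute the ISBN-10 check digit (weights 10 down to 2, with the check digit making the weighted total divisible by 11), the SBN-to-ISBN conversion by prefixing 0, and the ISBN-13 check digit (alternating weights 1 and 3, total divisible by 10).

```python
def isbn10_check_digit(nine_digits: str) -> str:
    """Check digit for the first nine digits of an ISBN-10.

    Weights run 10, 9, ..., 2; a remainder of 10 is written 'X'.
    """
    total = sum((10 - i) * int(d) for i, d in enumerate(nine_digits))
    r = (11 - total % 11) % 11
    return "X" if r == 10 else str(r)

def sbn_to_isbn10(sbn: str) -> str:
    """An SBN becomes an ISBN-10 by prefixing '0'.

    The prefixed 0 contributes nothing to the weighted sum, so the
    check digit is unchanged, as the text notes.
    """
    return "0" + sbn

def isbn13_check_digit(twelve_digits: str) -> str:
    """Check digit for the first twelve digits of an ISBN-13 (EAN-13)."""
    total = sum((1 if i % 2 == 0 else 3) * int(d)
                for i, d in enumerate(twelve_digits))
    return str((10 - total % 10) % 10)
```

For the example above, `isbn10_check_digit("034001381")` returns `"8"`, matching the printed SBN check digit, and the EAN-13 shown in the caption below, 978-3-16-148410-0, is reproduced by `isbn13_check_digit("978316148410")`.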
International Standard Book Number
–
A 13-digit ISBN, 978-3-16-148410-0, as represented by an EAN-13 bar code
61.
ArXiv
–
In many fields of mathematics and physics, almost all scientific papers are self-archived on the arXiv repository. Begun on August 14, 1991, arXiv.org passed the half-million-article milestone on October 3, 2008; by 2014 the submission rate had grown to more than 8,000 per month. The arXiv was made possible by the low-bandwidth TeX file format. Around 1990, Joanne Cohn began emailing physics preprints to colleagues as TeX files, but the number of papers being sent soon filled mailboxes to capacity. Additional modes of access were added: FTP in 1991 and Gopher in 1992. The term e-print was quickly adopted to describe the articles. Its original domain name was xxx.lanl.gov. Due to LANL's lack of interest in the rapidly expanding technology, in 1999 Ginsparg changed institutions to Cornell University, and the repository is now hosted principally by Cornell, with 8 mirrors around the world. Its existence was one of the factors that led to the current movement in scientific publishing known as open access. Mathematicians and scientists regularly upload their papers to arXiv.org for worldwide access. Ginsparg was awarded a MacArthur Fellowship in 2002 for his establishment of arXiv. The annual budget for arXiv is approximately $826,000 for 2013 to 2017, funded jointly by Cornell University Library and annual donations envisaged to vary in size between $2,300 and $4,000, based on each institution's usage. As of 14 January 2014, 174 institutions had pledged support for the period 2013–2017 on this basis. In September 2011, Cornell University Library took overall administrative and financial responsibility for arXiv's operation and development. Ginsparg was quoted in the Chronicle of Higher Education as saying it "was supposed to be a three-hour tour"; however, Ginsparg remains on the arXiv Scientific Advisory Board and on the arXiv Physics Advisory Committee.
The lists of moderators for many sections of the arXiv are publicly available. Additionally, an endorsement system was introduced in 2004 as part of an effort to ensure content that is relevant and of interest to current research in the specified disciplines. Under the system, for categories that use it, an author must be endorsed by an established arXiv author before being allowed to submit papers to those categories. Endorsers are not asked to review the paper for errors, but to check whether the paper is appropriate for the intended subject area. New authors from recognized academic institutions generally receive automatic endorsement, which in practice means that they do not need to deal with the endorsement system at all. However, the endorsement system has attracted criticism for allegedly restricting scientific inquiry. Grigori Perelman, who posted his proof of the Poincaré conjecture only on the arXiv, appears content to forgo the traditional peer-reviewed journal process, stating, "If anybody is interested in my way of solving the problem, it's all there – let them go and read about it." The arXiv generally re-classifies such works, e.g. into General mathematics. Papers can be submitted in any of several formats, including LaTeX, and PDF printed from a word processor other than TeX or LaTeX. The submission is rejected by the software if generating the final PDF file fails, or if any image file is too large. ArXiv now allows one to store and modify an incomplete submission; the time stamp on the article is set when the submission is finalized.
ArXiv
–
arXiv
ArXiv
–
A screenshot of the arXiv taken in 1994, using the browser NCSA Mosaic. At the time, HTML forms were a new technology.
62.
BBC
–
The British Broadcasting Corporation (BBC) is a British public service broadcaster headquartered at Broadcasting House in London, England. The total number of staff is 35,402 when part-time, flexible, and fixed-contract staff are included. The BBC is established under a Royal Charter and operates under its Agreement with the Secretary of State for Culture, Media and Sport. The licence fee is set by the British Government, agreed by Parliament, and used to fund the BBC's radio, TV, and online services. Britain's first live public broadcast, from the Marconi factory in Chelmsford, took place in June 1920. It was sponsored by the Daily Mail's Lord Northcliffe and featured the famous Australian soprano Dame Nellie Melba. The Melba broadcast caught the people's imagination and marked a turning point in the British public's attitude to radio. However, this public enthusiasm was not shared in official circles, where such broadcasts were held to interfere with important military and civil communications. By late 1920, pressure from these quarters and uneasiness among the staff of the licensing authority, the General Post Office (GPO), was sufficient to lead to a ban on further Chelmsford broadcasts. But by 1922, the GPO had received nearly 100 broadcast licence requests. John Reith, a Scottish Calvinist, was appointed the company's General Manager in December 1922, a few weeks after the company made its first official broadcast. The company was to be financed by a royalty on the sale of BBC wireless receiving sets from approved manufacturers. To this day, the BBC aims to follow the Reithian directive to inform, educate and entertain. The financial arrangements soon proved inadequate; set sales were disappointing as amateurs made their own receivers and listeners bought rival unlicensed sets.
By mid-1923, discussions between the GPO and the BBC had become deadlocked and the Postmaster-General commissioned a review of broadcasting by the Sykes Committee. The royalty was to be followed by a simple 10-shilling licence fee with no royalty once the wireless manufacturers' protection expired. The BBC's broadcasting monopoly was made explicit for the duration of its current broadcast licence; the BBC was also banned from presenting news bulletins before 19.00, and required to source all news from external wire services. Mid-1925 found the future of broadcasting under further consideration, this time by the Crawford committee. By now the BBC, under Reith's leadership, had forged a consensus favouring a continuation of the unified broadcasting service, but more money was still required to finance rapid expansion. Wireless manufacturers were anxious to exit the loss-making consortium, with Reith keen that the BBC be seen as a public service rather than a commercial enterprise. The recommendations of the Crawford Committee were published in March the following year and were still under consideration by the GPO when the 1926 general strike broke out in May. The strike temporarily interrupted newspaper production, and with restrictions on news bulletins waived, the BBC suddenly became the primary source of news for the duration of the crisis. The crisis placed the BBC in a delicate position. The Government was divided on how to handle the BBC but ended up trusting Reith, whose opposition to the strike mirrored the PM's own. Thus the BBC was granted sufficient leeway to pursue the Government's objectives largely in a manner of its own choosing; supporters of the strike nicknamed the BBC the BFC, for British Falsehood Company. Reith personally announced the end of the strike, which he marked by reciting from Blake's "Jerusalem", signifying that England had been saved. Reith argued that trust gained by authentic impartial news could then be used.
BBC
–
BBC Television Centre at White City, West London, which opened in 1960 and closed in 2013
BBC
–
BBC Pacific Quay in Glasgow, which was opened in 2007
BBC
–
BBC New Broadcasting House, London, which came into use during 2012–13.
BBC
–
The headquarters of the BBC at Broadcasting House in Portland Place, London, England. This section of the building is called 'Old Broadcasting House'.
63.
Edwin Thompson Jaynes
–
Edwin Thompson Jaynes was the Wayman Crow Distinguished Professor of Physics at Washington University in St. Louis. Jaynes strongly promoted the interpretation of probability theory as an extension of logic. In 1963, together with Fred Cummings, he modeled the evolution of a two-level atom in an electromagnetic field in a fully quantized way; this model is known as the Jaynes–Cummings model. His other contributions include the mind projection fallacy. His book Probability Theory: The Logic of Science was published posthumously in 2003; an unofficial list of errata is hosted by Kevin S. Van Horn. External links: Edwin Thompson Jaynes at the Mathematics Genealogy Project; Probability Theory: The Logic of Science (an early version of the book, no longer downloadable for copyright reasons); a comprehensive web page on E. T. Jaynes's life; the E. T. Jaynes obituary at Washington University; and http://bayes.wustl.edu/etj/articles/entropy.concentration.pdf (Jaynes's analysis of Rudolph Wolf's dice data).
Edwin Thompson Jaynes
–
Edwin Thompson Jaynes (1922–1998), photo taken circa 1960.
64.
An Anthology of Chance Operations
–
An Anthology of Chance Operations was an artist's book publication from the early 1960s of experimental neo-dada art and music composition that used John Cage-inspired indeterminacy. It was edited by La Monte Young and DIY co-published in 1963 by Young and Jackson Mac Low; the project became the manifestation of the original impetus for establishing Fluxus. Given free rein to include whoever and whatever he wanted, Young collected a body of new and experimental music, anti-art, poetry, essays and performance scores from America and Europe. The magazine, however, folded after one issue. Although it can be argued that An Anthology is not strictly a Fluxus publication, it was the first collaborative publication project between people who were to become part of Fluxus: Young, Mac Low and Maciunas. The art dealer Heiner Friedrich issued an edition in 1970. Contributors include Malka Safro, Simone Forti, Nam June Paik, Terry Riley, Dieter Roth, James Waring, Emmett Williams, Christian Wolff and La Monte Young. External link: An Anthology of Chance Operations (PDF).
An Anthology of Chance Operations
–
Book cover.
65.
GNU Free Documentation License
–
The GNU Free Documentation License (GFDL) is a copyleft license for free documentation, designed by the Free Software Foundation (FSF) for the GNU Project. It is similar to the GNU General Public License, giving readers the rights to copy, redistribute, and modify a work. Copies may also be sold commercially, but, if produced in larger quantities, the original document or source code must be made available to the work's recipient. The GFDL was designed for manuals, textbooks, and other reference and instructional materials; however, it can be used for any text-based work, regardless of subject matter. For example, the online encyclopedia Wikipedia uses the GFDL for all of its text. The GFDL was released in draft form for feedback in September 1999. After revisions, version 1.1 was issued in March 2000 and version 1.2 in November 2002; the current state of the license is version 1.3. The first discussion draft of the GNU Free Documentation License version 2 was released on September 26, 2006. Material licensed under the current version of the license can be used for any purpose, as long as the use meets certain conditions: all previous authors of the work must be attributed; all changes to the work must be logged; all derivative works must be licensed under the same license; and the full text of the license, unmodified invariant sections as defined by the author (if any), and any other added warranty disclaimers and copyright notices from previous versions must be maintained. Technical measures such as DRM may not be used to control or obstruct distribution or editing of the document. The license explicitly separates any kind of Document from Secondary Sections, which may not be integrated with the Document, but exist as front-matter materials or appendices. Secondary sections can contain information regarding the author's or publisher's relationship to the subject matter. If the material is modified, its title has to be changed.
The license also has provisions for the handling of front-cover and back-cover texts of books, as well as for History, Acknowledgements, Dedications and Endorsements sections. These features were added in part to make the license more financially attractive to commercial publishers of software documentation. Endorsements sections are intended to be used in official standard documents. The GFDL requires the ability to copy and distribute the Document in any medium, either commercially or noncommercially, and is therefore incompatible with material that excludes commercial re-use. Material that restricts commercial re-use is incompatible with the license and cannot be incorporated into the work; one example of such liberal and commercial fair use is parody. Although the two licenses work on similar copyleft principles, the GFDL is not compatible with the Creative Commons Attribution-ShareAlike license. Version 1.3 added exemptions that allow a GFDL-based collaborative project with multiple authors to transition to the CC BY-SA 3.0 license; if the material was not originally published on a Massive Multiauthor Collaboration (MMC) site, it can only be relicensed if it was added to an MMC before November 1, 2008. To prevent the clause from being used as a general compatibility measure, the relicensing permission was time-limited. At the release of version 1.3, the FSF stated that all content added before November 1, 2008 to Wikipedia, as an example, satisfied the conditions.
GNU Free Documentation License
–
The GFDL logo
66.
Logic in computer science
–
The ACM–IEEE Symposium on Logic in Computer Science (LICS) is an annual academic conference on the theory and practice of computer science in relation to mathematical logic. Extended versions of selected papers of each year's conference appear in renowned international journals such as Logical Methods in Computer Science. Since the first installment in 1988, the cover page of the conference proceedings has featured an artwork entitled Irrational Tiling by Logical Quantifiers, by Alvy Ray Smith. Since 1995, the Kleene award has been given each year to the best student paper. In addition, since 2006, the LICS Test-of-Time Award has been given annually to one among the twenty-year-old LICS papers that have best met the test of time. The list of computer science conferences contains other academic conferences in computer science.
67.
Set theory
–
Set theory is a branch of mathematical logic that studies sets, which informally are collections of objects. Although any type of object can be collected into a set, set theory is applied most often to objects that are relevant to mathematics; the language of set theory can be used in the definitions of nearly all mathematical objects. The modern study of set theory was initiated by Georg Cantor. Set theory is commonly employed as a foundational system for mathematics, particularly in the form of Zermelo–Fraenkel set theory with the axiom of choice. Beyond its foundational role, set theory is a branch of mathematics in its own right; contemporary research into set theory includes a diverse collection of topics, ranging from the structure of the real number line to the study of the consistency of large cardinals. Mathematical topics typically emerge and evolve through interactions among many researchers. Set theory, however, was founded by a single paper in 1874 by Georg Cantor: "On a Property of the Collection of All Real Algebraic Numbers". Since the 5th century BC, beginning with the Greek mathematician Zeno of Elea in the West and early Indian mathematicians in the East, mathematicians had struggled with the concept of infinity; especially notable is the work of Bernard Bolzano in the first half of the 19th century. Modern understanding of infinity began in 1867–71, with Cantor's work on number theory. An 1872 meeting between Cantor and Richard Dedekind influenced Cantor's thinking and culminated in Cantor's 1874 paper. Cantor's work initially polarized the mathematicians of his day: while Karl Weierstrass and Dedekind supported Cantor, Leopold Kronecker, now seen as a founder of mathematical constructivism, did not. This utility of set theory led to the article "Mengenlehre", contributed in 1898 by Arthur Schoenflies to Klein's encyclopedia. In 1899 Cantor had himself posed the question "What is the cardinal number of the set of all sets?"
Russell used his paradox as a theme in his 1903 review of continental mathematics in The Principles of Mathematics. In 1906 English readers gained the book Theory of Sets of Points by William Henry Young and his wife Grace Chisholm Young, published by Cambridge University Press. The momentum of set theory was such that debate on the paradoxes did not lead to its abandonment. The work of Zermelo in 1908 and Abraham Fraenkel in 1922 resulted in the set of axioms ZFC, which became the most commonly used set of axioms for set theory. The work of analysts such as Henri Lebesgue demonstrated the great mathematical utility of set theory. Set theory is used as a foundational system, although in some areas category theory is thought to be a preferred foundation. Set theory begins with a fundamental binary relation between an object o and a set A: if o is a member of A, the notation o ∈ A is used. Since sets are objects, the membership relation can relate sets as well. A derived binary relation between two sets is the subset relation, also called set inclusion. If all the members of set A are also members of set B, then A is a subset of B. For example, {1, 2} is a subset of {1, 2, 3}, and so is {2}, but {1, 4} is not. As implied by this definition, a set is a subset of itself; for cases where this possibility is unsuitable or would make sense to be rejected, the term proper subset is defined.
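The membership and inclusion relations described above map directly onto finite sets in many programming languages; the following is a minimal Python sketch (the particular sets are illustrative, not from the article):

```python
# Finite sets illustrating membership (o ∈ A) and inclusion (A ⊆ B).
A = {1, 2}
B = {1, 2, 3}

# Membership: 2 ∈ B but 4 ∉ B.
assert 2 in B
assert 4 not in B

# Inclusion: every member of A is also a member of B, so A ⊆ B.
assert A <= B          # A is a subset of B
assert B <= B          # every set is a subset of itself

# Proper subset: A ⊆ B and A ≠ B.
assert A < B
assert not (B < B)     # no set is a proper subset of itself
```

Python's `<=` and `<` operators on sets correspond to inclusion and proper inclusion respectively, so the distinction drawn in the text between subset and proper subset is visible directly in the last two assertions.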
Set theory
–
Georg Cantor
Set theory
–
A Venn diagram illustrating the intersection of two sets.
68.
A priori and a posteriori
–
The Latin phrases a priori and a posteriori are philosophical terms of art popularized by Immanuel Kant's Critique of Pure Reason, one of the most influential works in the history of philosophy. These terms are used with respect to reasoning to distinguish necessary conclusions from first premises from conclusions based on sense observation. A posteriori knowledge or justification is dependent on experience or empirical evidence, as with most aspects of science and personal knowledge. There are many points of view on these two types of knowledge, and their relationship gives rise to one of the oldest problems in modern philosophy. The terms a priori and a posteriori are primarily used as adjectives to modify the noun knowledge; however, a priori is sometimes used to modify other nouns, such as truth. Philosophers also may use apriority and aprioricity as nouns to refer to the quality of being a priori. Although definitions and use of the terms have varied in the history of philosophy, they have consistently labeled two separate epistemological notions. See also the related distinctions: deductive/inductive, analytic/synthetic, necessary/contingent. The intuitive distinction between a priori and a posteriori knowledge is best seen in examples. A priori: Consider the proposition "If George V reigned at least four days, then he reigned more than three days." This is something that one knows a priori, because it expresses a statement that one can derive by reason alone. A posteriori: Compare this with the proposition that George V reigned from 1910 to 1936. This is something that one must come to know a posteriori, because it expresses an empirical fact unknowable by reason alone. Several philosophers reacting to Kant sought to explain a priori knowledge without appealing to a special faculty of pure intuition, one that, as Paul Boghossian explains, "has never been described in satisfactory terms". One theory, popular among the logical positivists of the early 20th century, is what Boghossian calls the analytic explanation of the a priori.
The distinction between analytic and synthetic propositions was first introduced by Kant. In short, proponents of this explanation claimed to have reduced a dubious metaphysical faculty of pure reason to a legitimate linguistic notion of analyticity. However, the analytic explanation of a priori knowledge has undergone several criticisms. Most notably, Quine argued that the analytic–synthetic distinction is illegitimate. Quine states: "But for all its a priori reasonableness, a boundary between analytic and synthetic statements simply has not been drawn. That there is such a distinction to be drawn at all is an unempirical dogma of empiricists, a metaphysical article of faith." While the soundness of Quine's critique is highly disputed, it had a strong effect on the project of explaining the a priori in terms of the analytic. The metaphysical distinction between necessary and contingent truths has also been related to a priori and a posteriori knowledge. A proposition that is necessarily true is one whose negation is self-contradictory. Consider the proposition that all bachelors are unmarried: its negation, the proposition that some bachelors are married, is incoherent, because the concept of being unmarried is part of the concept of being a bachelor.
69.
Logical truth
–
Logical truth is one of the most fundamental concepts in logic, and there are different theories on its nature. A logical truth is a statement which is true, and remains true under all reinterpretations of its components other than its logical constants; it is a type of analytic statement. All of philosophical logic can be thought of as providing accounts of the nature of logical truth. Logical truths are truths which are considered to be necessarily true: they are considered to be such that they could not be untrue, and must hold under every sense of intuition, practices, and bodies of beliefs. However, it is not universally agreed that there are any statements which are necessarily true. A logical truth is considered by some philosophers to be a statement which is true in all possible worlds. This is contrasted with facts which are true in this world as it has historically unfolded. Later, with the rise of formal logic, a logical truth was considered to be a statement which is true under all possible interpretations. Empiricists commonly argue that logical truths are analytic: being analytic statements, logical truths do not contain any information about any matters of fact. Other than logical truths, there is also a second class of analytic statements. The characteristic of such a statement is that it can be turned into a logical truth by substituting synonyms for synonyms salva veritate: for example, "No bachelor is married" can be turned into "No unmarried man is married" by substituting "unmarried man" for its synonym "bachelor". In his essay "Two Dogmas of Empiricism", the philosopher W. V. O. Quine called into question the distinction between analytic and synthetic statements. In his conclusion, Quine rejects that logical truths are necessary truths; instead he posits that the truth-value of any statement can be changed, including logical truths. Considering different interpretations of the same statement leads to the notion of truth value.
The simplest approach to truth values means that a statement may be "true" in one case but "false" in another. In one sense of the term tautology, it is any type of formula or proposition which turns out to be true under any possible interpretation of its terms; this is synonymous with logical truth. However, the term tautology is also commonly used to refer to what could more specifically be called truth-functional tautologies, and not all logical truths are tautologies of this kind. Logical constants, including logical connectives and quantifiers, can all be reduced conceptually to logical truth. For instance, two or more statements are logically incompatible if, and only if, their conjunction is logically false. One statement logically implies another when it is logically incompatible with the negation of the other. A statement is logically true if, and only if, its opposite is logically false; the opposite statements must contradict one another. In this way all logical connectives can be expressed in terms of preserving logical truth.
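The truth-functional notions above (tautology, logical falsehood, incompatibility, implication) can be checked mechanically by enumerating all truth assignments. A minimal Python sketch, with formulas encoded as boolean functions (the encoding is illustrative, not from the article):

```python
from itertools import product

def is_tautology(f, arity):
    # A truth-functional tautology is true under every assignment.
    return all(f(*vals) for vals in product([False, True], repeat=arity))

def is_logically_false(f, arity):
    # A statement is logically false iff its negation is a tautology.
    return is_tautology(lambda *vals: not f(*vals), arity)

# "p or not p" is a tautology; "p and not p" is logically false.
assert is_tautology(lambda p: p or not p, 1)
assert is_logically_false(lambda p: p and not p, 1)

# Two statements are incompatible iff their conjunction is logically
# false, and one statement implies another iff it is incompatible with
# the negation of the other.  Here "p and q" implies "p", because
# "(p and q) and not p" is logically false:
assert is_logically_false(lambda p, q: (p and q) and not p, 2)
```

Brute-force enumeration like this only decides truth-functional tautologies; as the text notes, logical truths involving quantifiers are not tautologies of this kind and cannot be checked by a finite truth table.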
70.
Name
–
A name is a term used for identification. Names can identify a class or category of things, or a single thing, either uniquely or within a given context. A personal name identifies, not necessarily uniquely, a specific individual human. The name of a specific entity is sometimes called a proper name and is, when consisting of only one word, a proper noun. Other nouns are sometimes called common names or general names. A name can be given to a person, place, or thing. Caution must be exercised when translating, for there are ways that one language may prefer one type of name over another. Also, claims to preference or authority can be refuted: the British did not refer to Louis-Napoleon as Napoleon III during his rule. The word name comes from Old English nama, cognate with Old High German namo, Sanskrit नामन्, Latin nomen, Greek ὄνομα, and perhaps connected to non-Indo-European terms such as Tamil namam and Proto-Uralic *nime. In the ancient world, particularly in the ancient Near East, names were thought to be powerful and to act, in some ways, on their own. By invoking a god or spirit by name, one was thought to be able to summon that spirit's power for some kind of miracle or magic. In the Old Testament, the names of individuals are meaningful, and a change of name indicates a change of status. For example, the patriarch Abram and his wife Sarai are renamed Abraham and Sarah, and Simon was renamed Peter when he was given the Keys to Heaven. This is recounted in the Gospel of Matthew, chapter 16, which according to Roman Catholic teaching was when Jesus promised to Saint Peter the power to take binding actions. Throughout the Bible, characters are given names at birth that reflect something of significance or describe the course of their lives. For example, Solomon meant peace, and the king with that name was the first whose reign was without war. Likewise, Joseph named his firstborn son Manasseh, saying, "God has made me forget all my troubles." However, people were also known as the child of their father.
For example, דוד בן ישי means "David, son of Jesse". The Talmud also states that all those who descend to Gehenna will rise in the time of the Messiah; however, there are three exceptions, one of which is he who calls another by a derisive nickname. Street names within a city may follow a naming convention. Some examples include: In Manhattan, roads that cross the island from east to west are called Streets, while those that run the length of the island are called Avenues. In Ontario, numbered concession roads are east–west routes, whereas lines are north–south routes.
Name
–
A cartouche indicates that the Egyptian hieroglyphs enclosed are a royal name.
71.
List of paradoxes
–
This is a list of paradoxes, grouped thematically. The grouping is approximate, as paradoxes may fit more than one category. Because of varying definitions of the word paradox, some of the following are not considered to be paradoxes by everyone. This list collects only scenarios that have been called a paradox by at least one source and have their own article. Although considered paradoxes, some of these are based on fallacious reasoning; informally, the term is often used to describe a counter-intuitive result. Barbershop paradox: The supposition that, if one of two simultaneous assumptions leads to a contradiction, the other assumption is also disproved leads to paradoxical consequences. Not to be confused with the Barber paradox. What the Tortoise Said to Achilles: "Whatever Logic is good enough to tell me is worth writing down." Also known as Carroll's paradox; not to be confused with the physical paradox of the same name. Catch-22: A situation in which someone is in need of something that can only be had by not being in need of it. A soldier who wants to be declared insane in order to avoid combat is deemed not insane for that very reason. Drinker paradox: In any pub there is a customer of whom it is true to say: if that customer drinks, everybody in the pub drinks. Paradox of entailment: Inconsistent premises always make an argument valid. Raven paradox: Observing a green apple increases the likelihood of all ravens being black. Ross's paradox: Disjunction introduction poses a problem for imperative inference by seemingly permitting arbitrary imperatives to be inferred. Unexpected hanging paradox: The day of the hanging will be a surprise, so it cannot happen at all; the surprise examination and Bottle Imp paradox use similar logic. Barber paradox: A barber shaves all and only those men who do not shave themselves. Does he shave himself? Bhartrhari's paradox: The thesis that there are things which are unnameable conflicts with the notion that something is named by calling it unnameable.
Berry paradox: The phrase "the first number not nameable in under ten words" appears to name it in nine words. Paradox of the Court: A law student agrees to pay his teacher after winning his first case; the teacher then sues the student for payment. Curry's paradox: "If this sentence is true, then Santa Claus exists." Epimenides paradox: A Cretan says, "All Cretans are liars." This paradox works in much the same way as the Liar paradox.
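The drinker paradox listed above is a theorem of classical predicate logic, and for a finite pub it can be verified by brute force over every possible drinking pattern. A minimal Python sketch (the encoding of the claim is illustrative):

```python
from itertools import product

def drinker_witness_exists(drinks):
    # drinks: one boolean per customer, True if that customer drinks.
    # The paradox claims some customer c satisfies the conditional
    # "if c drinks, then everybody in the pub drinks".
    everyone_drinks = all(drinks)
    return any((not d) or everyone_drinks for d in drinks)

# The claim holds for every non-empty pub of up to four customers,
# whatever the drinking pattern:
assert all(drinker_witness_exists(pattern)
           for n in range(1, 5)
           for pattern in product([False, True], repeat=n))
```

The check makes the resolution of the "paradox" visible: if anyone abstains, that abstainer vacuously satisfies the conditional; if nobody abstains, every customer satisfies it. The non-empty pub is essential, since an empty pub has no witness.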
List of paradoxes
–
The Monty Hall problem: which door do you choose?
72.
Integrated Authority File
–
The Integrated Authority File (German: Gemeinsame Normdatei, GND) is an international authority file for the organisation of personal names, subject headings and corporate bodies from catalogues. It is used mainly for documentation in libraries and, increasingly, also by archives. The GND is managed by the German National Library in cooperation with various regional library networks in German-speaking Europe and other partners, and falls under the Creative Commons Zero (CC0) license. The GND specification provides a hierarchy of high-level entities and sub-classes, useful in library classification, and an approach to unambiguous identification of single elements. It also comprises an ontology intended for knowledge representation in the semantic web, available in RDF format.
Integrated Authority File
–
GND screenshot