1.
Belief
–
Belief is the state of mind in which a person thinks something to be the case, with or without there being empirical evidence to prove that something is the case with factual certainty. Another way of defining belief sees it as a representation of an attitude positively oriented towards the likelihood of something being true. In the context of Ancient Greek thought, two related concepts were identified with regard to the concept of belief: pistis and doxa. Simplified, we may say that pistis refers to trust and confidence, while doxa refers to opinion and acceptance; the English word orthodoxy derives from doxa. Jonathan Leicester suggests that belief has the purpose of guiding action rather than indicating truth. In epistemology, philosophers use the term belief to refer to personal attitudes associated with true or false ideas. However, belief does not require active introspection and circumspection: for example, few people ponder whether or not the sun will rise tomorrow; we simply assume that it will. Since belief is an important aspect of mundane life, a related question, posed by Eric Schwitzgebel in the Stanford Encyclopedia of Philosophy, asks how a physical organism can have beliefs. Epistemology is concerned with delineating the boundary between justified belief and opinion, and is involved generally with a theoretical philosophical study of knowledge. The primary problem in epistemology is to understand exactly what is needed in order for us to have knowledge. Plato dismisses the possibility of a direct equivalence between belief and knowledge, even when the one who opines grounds his belief on a sound rule. Among American epistemologists, Gettier and Goldman have questioned the justified-true-belief definition of knowledge.
Mainstream psychology and related disciplines have traditionally treated belief as if it were the simplest form of mental representation; philosophers have tended to be more abstract in their analysis, and much of the work examining the viability of the belief concept stems from philosophical analysis. The concept of belief presumes a subject (the believer) and an object of belief (the proposition). Beliefs are sometimes divided into core beliefs, which are actively thought about, and dispositional beliefs, which are merely attributed to someone who has not thought about the issue. For example, if asked "do you believe tigers wear pink pajamas?", a person might answer that they do not, despite the fact that they may never have thought about this situation before. This has important implications for understanding the neuropsychology and neuroscience of belief: if the concept of belief is incoherent, then any attempt to find the underlying neural processes that support it will fail. Jerry Fodor is one of the defenders of the view that belief is a coherent form of mental representation, while, most notably, philosopher Stephen Stich has argued that the concept of belief is incoherent; on the latter view, science has not provided, and may never provide, a detailed account of belief.
Belief
–
We are influenced by many factors that ripple through our minds as our beliefs form, evolve, and may eventually change
Belief
–
A Venn / Euler diagram which grants that truth and belief may be distinguished and that their intersection is knowledge. Unsurprisingly, this is a controversial analysis.
Belief
–
Philosopher Jonathan Glover warns that belief systems are like whole boats in the water; it is extremely difficult to alter them all at once (e.g., it may be too stressful, or people may maintain their biases without realizing it).
2.
Determinism
–
Determinism is the philosophical position that for every event there exist conditions that could cause no other event. There are many varieties of determinism, depending on what preconditions are considered to be determinative of an event or action; deterministic theories throughout the history of philosophy have sprung from diverse and sometimes overlapping motives and considerations. Some forms of determinism can be empirically tested with ideas from physics. The opposite of determinism is some kind of indeterminism, and determinism is often contrasted with free will. Determinism is often taken to mean causal determinism, which in physics is known as cause-and-effect: the concept that events within a given paradigm are bound by causality in such a way that any state is completely determined by prior states. This meaning can be distinguished from the other varieties of determinism mentioned below. Numerous historical debates involve many philosophical positions and varieties of determinism, including debates concerning determinism and free will, technically denoted as compatibilism and incompatibilism. Determinism should not be confused with the self-determination of human actions by reasons, motives, and desires, and it rarely requires that perfect prediction be practically possible. However, causal determinism is a broad enough term to consider that one's deliberations and choices are themselves links in the causal chain. Causal determinism proposes that there is an unbroken chain of prior occurrences stretching back to the origin of the universe. The relation between events need not be specified, nor the origin of that universe; causal determinists simply believe that there is nothing in the universe that is uncaused or self-caused. Historical determinism can also be synonymous with causal determinism, and causal determinism has also been considered more generally as the idea that everything that happens or exists is caused by antecedent conditions. Such claims can also be considered metaphysical in origin.
Nomological determinism is the most common form of causal determinism: the notion that the past and the present dictate the future entirely and necessarily by rigid natural laws, so that every occurrence results inevitably from prior events. Quantum mechanics and various interpretations thereof pose a serious challenge to this view. Nomological determinism is sometimes illustrated by the thought experiment of Laplace's demon. It is sometimes called scientific determinism, although that is a misnomer; physical determinism is generally used synonymously with nomological determinism. Necessitarianism is closely related to the causal determinism described above: it is a metaphysical principle that denies all mere possibility, holding that there is exactly one way for the world to be. Leucippus claimed there were no uncaused events, and that everything occurs for a reason and by necessity.
Determinism
–
Many philosophical theories of determinism frame themselves with the idea that reality follows a sort of predetermined path
Determinism
–
Adequate determinism focuses on the fact that, even without a full understanding of microscopic physics, we can predict the distribution of 1000 coin tosses
Determinism
–
Nature and nurture interact in humans. A scientist looking at a sculpture after some time does not ask whether we are seeing the effects of the starting materials or of environmental influences.
Determinism
–
A technological determinist might suggest that technology like the mobile phone is the greatest factor shaping human civilization.
3.
Fatalism
–
Fatalism is a philosophical doctrine that stresses the subjugation of all events or actions to fate. Fatalism generally refers to any of the following ideas. First, the view that we are powerless to do anything other than what we actually do; included in this is the claim that man has no power to influence the future, and this belief is very similar to predeterminism. Second, an attitude of resignation in the face of some event or events which are thought to be inevitable; Friedrich Nietzsche named this idea "Turkish fatalism" in his book The Wanderer and His Shadow. Third, the view that acceptance is appropriate, rather than resistance against inevitability; this belief is similar to defeatism. Ājīvika was a system of ancient Indian philosophy and a movement of the Mahajanapada period in the Indian subcontinent. The same sources make the Ājīvikas out to be strict fatalists: if all future occurrences are rigidly determined, coming events may in some sense be said to exist already; the future exists in the present, and both exist in the past. Time is thus on ultimate analysis illusory, and every phase of a process is always present. In a soul which has attained salvation its earthly births are still present; nothing is destroyed and nothing is produced. Not only are all things determined, but their change and development is a cosmic illusion. Makkhali Gosala was an ascetic teacher of ancient India, regarded to have been born in 484 BCE, and a contemporary of Siddhartha Gautama, the founder of Buddhism, and of Mahavira. While the terms fatalism, determinism and predeterminism are sometimes used interchangeably, all these doctrines share common ground. Determinists generally agree that human actions affect the future but that human action is itself determined by a causal chain of prior events; their view does not accentuate a submission to fate or destiny. Fatalism is a looser term than determinism: the presence of historical indeterminisms or chances, i.e. events that could not be predicted by sole knowledge of other events, is an idea still compatible with fatalism. Necessity will happen just as inevitably as chance; both can be imagined as sovereign. Likewise, determinism is a broader term than predeterminism.
4.
Hypothesis
–
A hypothesis is a proposed explanation for a phenomenon. For a hypothesis to be a scientific hypothesis, the scientific method requires that one can test it. Scientists generally base scientific hypotheses on previous observations that cannot satisfactorily be explained with the available scientific theories, even though the words "hypothesis" and "theory" are often used synonymously. A working hypothesis is a provisionally accepted hypothesis proposed for further research. As Philip Wadler puts it, P is the assumption in a "what if" question: "Remember, the way that you prove an implication is by assuming the hypothesis." In its ancient usage, hypothesis referred to a summary of the plot of a classical drama. The English word hypothesis comes from the ancient Greek word ὑπόθεσις (hupothesis). In Plato's Meno, Socrates dissects virtue with a method used by mathematicians, that of investigating from a hypothesis. In this sense, hypothesis refers to a clever idea or to a convenient mathematical approach that simplifies cumbersome calculations. In common usage in the 21st century, a hypothesis refers to a provisional idea whose merit requires evaluation. For proper evaluation, the framer of a hypothesis needs to define specifics in operational terms; a hypothesis requires more work by the researcher in order to either confirm or disprove it. In due course, a confirmed hypothesis may become part of a theory or occasionally may grow to become a theory itself. Normally, scientific hypotheses have the form of a mathematical model. In entrepreneurial science, a hypothesis is used to formulate provisional ideas within a business setting; the formulated hypothesis is then evaluated, where it is shown to be either true or false through a verifiability- or falsifiability-oriented experiment. Any useful hypothesis will enable predictions by reasoning: it might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature.
The prediction may also invoke statistics and only talk about probabilities. Other philosophers of science have rejected the criterion of falsifiability or supplemented it with other criteria, such as verifiability or coherence. The scientific method involves experimentation, to test the ability of some hypothesis to adequately answer the question under investigation. In contrast, unfettered observation is not as likely to raise unexplained issues or open questions in science as would the formulation of a crucial experiment to test the hypothesis; a thought experiment might also be used to test the hypothesis. In framing a hypothesis, the investigator must not currently know the outcome of a test, or it must remain reasonably under continuing investigation; only in such cases does the experiment, test or study potentially increase the probability of showing the truth of a hypothesis.
Hypothesis
–
Andreas Cellarius's hypothesis, demonstrating the planetary motions in eccentric and epicyclical orbits
5.
Truth
–
Truth is most often used to mean being in accord with fact or reality, or fidelity to an original or standard. Truth may also often be used in modern contexts to refer to an idea of "truth to self", or authenticity. The commonly understood opposite of truth is falsehood, which, correspondingly, can also take on a logical, factual, or ethical meaning. The concept of truth is discussed and debated in several contexts, including philosophy, art, and religion. Some philosophers view the concept of truth as basic, and unable to be explained in any terms that are more easily understood than the concept of truth itself. Commonly, truth is viewed as the correspondence of language or thought to an independent reality; other philosophers take this common meaning to be secondary and derivative. On this view, the conception of truth as correctness is a later derivation from the concept's original essence. Various theories and views of truth continue to be debated among scholars and philosophers. Language and words are a means by which humans convey information to one another, and the method used to determine what is a "truth" is termed a criterion of truth. The English word truth is derived from Old English tríewþ, tréowþ, trýwþ, Middle English trewþe, cognate to Old High German triuwida; like troth, it is a -th nominalisation of the adjective true. Old Norse trú means "faith, word of honour; religious faith". Thus, truth involves both the quality of faithfulness, fidelity, loyalty, sincerity and veracity, and that of agreement with fact or reality, in Anglo-Saxon expressed by sōþ. All Germanic languages besides English have introduced a terminological distinction between truth as fidelity and truth as factuality. To express factuality, North Germanic opted for nouns derived from sanna "to assert, affirm", while continental West Germanic opted for continuations of wâra "faith, trust, pact".
Romance languages use terms following the Latin veritas, while the Greek aletheia and Russian pravda have separate etymological origins. Each of the major substantive theories of truth presents perspectives that are widely shared by published scholars; however, the theories are not universally accepted. More recently developed deflationary or minimalist theories of truth have emerged as competitors to the substantive theories. Minimalist reasoning centres around the notion that the application of a term like true to a statement does not assert anything significant about it, for instance, anything about its nature; it realises truth as a label utilised in general discourse to express agreement and to stress claims. Correspondence theories emphasise that true beliefs and true statements correspond to the actual state of affairs. This type of theory stresses a relationship between thoughts or statements on one hand, and things or objects on the other; it is a traditional model tracing its origins to ancient Greek philosophers such as Socrates, Plato, and Aristotle. This class of theories holds that the truth or the falsity of a representation is determined in principle entirely by how it relates to things. Aquinas restated the theory as: a judgment is said to be true when it conforms to the external reality. Many modern theorists have stated that this ideal cannot be achieved without analysing additional factors; for example, language plays a role in that all languages have words to represent concepts that are virtually undefined in other languages.
Truth
–
Time Saving Truth from Falsehood and Envy, François Lemoyne, 1737
Truth
–
Truth, holding a mirror and a serpent (1896). Olin Levi Warner, Library of Congress Thomas Jefferson Building, Washington, D.C.
Truth
–
An angel carrying the banner of "Truth", Roslin, Midlothian
Truth
–
Walter Seymour Allward's Veritas (Truth) outside the Supreme Court of Canada, Ottawa, Ontario, Canada
6.
Epistemology
–
Epistemology is the branch of philosophy concerned with the theory of knowledge. Epistemology studies the nature of knowledge, justification, and the rationality of belief. The term "epistemology" was first used by the Scottish philosopher James Frederick Ferrier in 1854. However, according to Brett Warren, King James VI of Scotland had previously personified this philosophical concept as the character Epistemon in 1591; this philosophical approach signified a philomath seeking to obtain greater knowledge through epistemology with the use of theology. The dialogue was used by King James to educate society on various concepts, including its history. The word epistemology is derived from the ancient Greek epistēmē, meaning "knowledge", and the suffix -logy, meaning "a logical discourse". J. F. Ferrier coined epistemology on the model of ontology, to designate that branch of philosophy which aims to discover the meaning of knowledge, and called it the "true beginning" of philosophy. The word is equivalent to the concept Wissenschaftslehre, which was used by German philosophers such as Johann Fichte; French philosophers then gave the term épistémologie a narrower meaning as "theory of knowledge". Émile Meyerson opened his Identity and Reality, written in 1908, with this narrower sense of the word. In mathematics, it is known that 2 + 2 = 4, but there is also knowing how to add two numbers, and knowing a person, place, thing, or activity. Some philosophers think there is an important distinction between "knowing that", "knowing how", and "acquaintance-knowledge", with epistemology being primarily concerned with the first of these. While these distinctions are not explicit in English, they are defined explicitly in other languages. In French, Portuguese, Spanish and Dutch, "to know (a person)" is translated using connaître, conhecer, conocer and kennen respectively. Modern Greek has the verbs γνωρίζω and ξέρω; Italian has the verbs conoscere and sapere, and the nouns for knowledge are conoscenza and sapienza; German has the verbs wissen and kennen.
The verb itself implies a process: you have to go from one state to another. This verb seems to be the most appropriate in terms of describing the episteme in one of the modern European languages, hence the German name Erkenntnistheorie. The theoretical interpretation and significance of these linguistic issues remains controversial. In his paper On Denoting and his later book Problems of Philosophy, Bertrand Russell stressed the distinction between knowledge by description and knowledge by acquaintance. Gilbert Ryle is similarly credited with stressing the distinction between knowing how and knowing that in The Concept of Mind; he argued that a failure to acknowledge the distinction between "knowledge that" and "knowledge how" leads to infinite regress. This includes the truth, and everything else we accept as "true" for ourselves from a cognitive point of view. Whether someone's belief is true is not a prerequisite for its being a belief; on the other hand, if something is actually known, then it categorically cannot be false. For example, suppose a man believes a bridge is safe and it collapses under his weight: it would not be accurate to say that he knew that the bridge was safe, because plainly it was not. By contrast, if the bridge actually supported his weight, then he might say that he had believed that the bridge was safe, whereas now, after proving it to himself, he knows it was. Epistemologists argue over whether belief is the proper truth-bearer; some would rather describe knowledge as a system of justified true propositions. Plato, in his Gorgias, argues that belief is the most commonly invoked truth-bearer.
Epistemology
–
Plato – Kant – Nietzsche
7.
Measure (mathematics)
–
In mathematical analysis, a measure on a set is a systematic way to assign a number to each suitable subset of that set, intuitively interpreted as its size. In this sense, a measure is a generalization of the concepts of length, area, and volume. For instance, the Lebesgue measure of the interval [0, 1] in the real numbers is its length in the everyday sense of the word, specifically 1. Technically, a measure is a function that assigns a non-negative real number or +∞ to (certain) subsets of a set X. It must further be countably additive: the measure of a subset that can be decomposed into a finite (or countably infinite) number of smaller disjoint subsets is the sum of the measures of the smaller subsets. In general, if one wants to associate a consistent size to each subset of a set while satisfying the other axioms of a measure, one only finds trivial examples. This problem was resolved by defining measure only on a sub-collection of all subsets, the so-called measurable subsets, which are required to form a σ-algebra; this means that countable unions, countable intersections and complements of measurable subsets are measurable. Non-measurable sets in a Euclidean space, on which the Lebesgue measure cannot be defined consistently, are necessarily complicated, in the sense of being badly mixed up with their complement; indeed, their existence is a non-trivial consequence of the axiom of choice. Measure theory was developed in successive stages during the late 19th and early 20th centuries by Émile Borel, Henri Lebesgue, Johann Radon and others. The main applications of measures are in the foundations of the Lebesgue integral and in Andrey Kolmogorov's axiomatisation of probability theory. Probability theory considers measures that assign to the whole set the size 1, and considers measurable subsets to be events whose probability is given by the measure. Ergodic theory considers measures that are invariant under, or arise naturally from, a dynamical system. Let X be a set and Σ a σ-algebra over X. A function μ from Σ to the extended real number line is called a measure if it satisfies the following properties. Non-negativity: for all E in Σ, μ(E) ≥ 0. Null empty set: μ(∅) = 0.
Countable additivity (or σ-additivity): for all countable collections E₁, E₂, … of pairwise disjoint sets in Σ, μ(E₁ ∪ E₂ ∪ …) = μ(E₁) + μ(E₂) + …. Instead of assuming the null empty set property, one may require merely that at least one set E has finite measure; then the empty set automatically has measure zero because of countable additivity: μ(E) = μ(E ∪ ∅ ∪ ∅ ∪ …) = μ(E) + μ(∅) + μ(∅) + …, which implies that μ(∅) = 0. If only the second and third conditions of the definition of measure above are met, and μ takes on at most one of the values ±∞, then μ is called a signed measure. The pair (X, Σ) is called a measurable space, and the members of Σ are called measurable sets. If (X, Σ_X) and (Y, Σ_Y) are two measurable spaces, then a function f : X → Y is called measurable if for every Y-measurable set B ∈ Σ_Y, the preimage f⁻¹(B) belongs to Σ_X. A triple (X, Σ, μ) is called a measure space. A probability measure is a measure with total measure one, i.e. μ(X) = 1, and a probability space is a measure space with a probability measure.
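The definition above can be illustrated with a small sketch (not a library API): on a finite set X with the full power set as σ-algebra, a measure can be given by a non-negative weight on each point, the measure of a subset being the sum of the weights of its elements. Non-negativity, μ(∅) = 0 and additivity then hold by construction; the die example and function names are illustrative assumptions.

```python
# A finite measure as a weight function on points: the measure of a
# subset is the sum of the weights of its elements.
from fractions import Fraction

def make_measure(weights):
    """weights: dict mapping each point of X to a non-negative number."""
    assert all(w >= 0 for w in weights.values()), "non-negativity"
    def mu(subset):
        return sum(weights[x] for x in subset)
    return mu

# A probability measure: total mass 1, spread uniformly over a six-sided die.
die = {face: Fraction(1, 6) for face in range(1, 7)}
mu = make_measure(die)

print(mu(set()))       # the empty set has measure 0
print(mu({2, 4, 6}))   # P(even) = 1/2
A, B = {1, 2}, {5}
# Additivity on disjoint sets: mu(A ∪ B) = mu(A) + mu(B)
print(mu(A | B) == mu(A) + mu(B))   # True
```

Using exact Fraction weights keeps the additivity check free of floating-point rounding.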
Measure (mathematics)
–
Informally, a measure has the property of being monotone in the sense that if A is a subset of B, the measure of A is less than or equal to the measure of B. Furthermore, the measure of the empty set is required to be 0.
8.
Event (probability theory)
–
In probability theory, an event is a set of outcomes of an experiment (a subset of the sample space) to which a probability is assigned. A single outcome may be an element of many different events. Every event defines a complementary event, namely the complementary set of outcomes. Typically, when the sample space is finite, any subset of the sample space is an event. However, this approach does not work well in cases where the sample space is uncountably infinite. So, when defining a probability space it is possible, and often necessary, to exclude certain subsets of the sample space from being events. If we assemble a deck of 52 playing cards with no jokers and draw a single card from the deck, then the sample space is a 52-element set, as each card is a possible outcome. An event, however, is any subset of the sample space, including any singleton set, the empty set and the sample space itself. Other events are proper subsets of the sample space that contain multiple elements. So, for example, potential events include: "Red and black at the same time without being a joker" (0 elements), "The 5 of Hearts" (1 element), "A King" (4 elements), "A Face card" (12 elements), "A Spade" (13 elements), "A Face card or a red suit" (32 elements). Since all events are sets, they are usually written as sets. Defining all subsets of the sample space as events works well when there are only finitely many outcomes. For many standard probability distributions, such as the normal distribution, the sample space is the set of real numbers or a subset thereof, and attempts to define probabilities for all subsets of the real numbers run into difficulties when one considers badly behaved sets, such as those that are nonmeasurable. Hence, it is necessary to restrict attention to a limited family of subsets. The most natural choice is the Borel measurable sets, derived from unions and intersections of intervals; however, the larger class of Lebesgue measurable sets proves more useful in practice. In the general description of probability spaces, an event may be defined as an element of a selected σ-algebra of subsets of the sample space. Under this definition, any subset of the sample space that is not an element of the σ-algebra is not an event.
With a reasonable specification of the probability space, however, all events of interest are elements of the σ-algebra. Even though events are subsets of some sample space Ω, they are often written as propositional formulas involving random variables. For example, if X is a real-valued random variable defined on the sample space Ω, the event {ω ∈ Ω : u < X(ω) ≤ v} can be written more conveniently as, simply, u < X ≤ v.
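The card-deck discussion above can be sketched in code: under the uniform measure on a 52-card sample space, each event's probability is its size divided by 52. The card encoding and function names are illustrative assumptions, not part of the source.

```python
# Events as subsets of a 52-card sample space, with uniform probabilities.
from fractions import Fraction

ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['Hearts', 'Diamonds', 'Spades', 'Clubs']
sample_space = {(r, s) for r in ranks for s in suits}  # 52 outcomes

def prob(event):
    # Uniform probability measure: |event| / |sample space|
    return Fraction(len(event), len(sample_space))

kings = {c for c in sample_space if c[0] == 'K'}
faces = {c for c in sample_space if c[0] in ('J', 'Q', 'K')}
reds  = {c for c in sample_space if c[1] in ('Hearts', 'Diamonds')}

print(prob(kings))          # 1/13 (4 of 52 cards)
print(prob(faces | reds))   # "a face card or a red suit": 32/52 = 8/13
print(prob(set()))          # the impossible event: 0
```

Note that the union of "a face card" (12 elements) and "a red suit" (26 elements) has 32 elements, not 38, because the 6 red face cards belong to both events.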
Event (probability theory)
–
A Venn diagram of an event. B is the sample space and A is an event. By the ratio of their areas, the probability of A is approximately 0.4.
9.
Statistics
–
Statistics is a branch of mathematics dealing with the collection, analysis, interpretation, presentation, and organization of data. In applying statistics to, e.g. a scientific, industrial, or social problem, it is conventional to begin with a statistical population or process to be studied; populations can be diverse topics such as "all people living in a country" or "every atom composing a crystal". Statistics deals with all aspects of data, including the planning of data collection in terms of the design of surveys and experiments. The statistician Sir Arthur Lyon Bowley defined statistics as "numerical statements of facts in any department of inquiry placed in relation to each other". When census data cannot be collected, statisticians collect data by developing specific experiment designs and survey samples; representative sampling assures that inferences and conclusions can safely extend from the sample to the population as a whole. In contrast, an observational study does not involve experimental manipulation. Inferences in mathematical statistics are made under the framework of probability theory, which deals with the analysis of random phenomena. A standard statistical procedure involves the test of the relationship between two data sets, or a data set and synthetic data drawn from an idealized model. A hypothesis is proposed for the relationship between the two data sets, and this is compared as an alternative to an idealized null hypothesis of no relationship between the two data sets. Rejecting or disproving the null hypothesis is done using statistical tests that quantify the sense in which the null can be proven false. Working from a null hypothesis, two basic forms of error are recognized: Type I errors, in which the null hypothesis is falsely rejected, and Type II errors, in which the null hypothesis fails to be rejected when it is false. Multiple problems have come to be associated with this framework, ranging from obtaining a sufficient sample size to specifying an adequate null hypothesis. Measurement processes that generate statistical data are also subject to error.
Many of these errors are classified as random (noise) or systematic (bias). The presence of missing data or censoring may result in biased estimates, and specific techniques have been developed to address these problems. Statistics continues to be an area of active research, for example on the problem of how to analyze big data. Statistics is a body of science that pertains to the collection, analysis, interpretation or explanation, and presentation of data. Some consider statistics to be a distinct mathematical science rather than a branch of mathematics. While many scientific investigations make use of data, statistics is concerned with the use of data in the context of uncertainty; mathematical techniques used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure-theoretic probability theory. In applying statistics to a problem, it is standard practice to start with a population or process to be studied, such as all people living in a country or every atom composing a crystal. Ideally, statisticians compile data about the entire population; this may be organized by governmental statistical institutes.
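The null-hypothesis procedure described above can be sketched for the simplest case: testing whether a coin is fair. Under the null hypothesis "the coin is fair", we compute an exact two-sided p-value for an observed count of heads. The function names, the data (16 heads in 20 tosses), and the 5% threshold are illustrative assumptions, not part of the source.

```python
# Exact binomial test of the fair-coin null hypothesis (stdlib only).
from math import comb

def binom_pmf(k, n, p=0.5):
    # Probability of exactly k heads in n tosses of a coin with P(heads)=p
    return comb(n, k) * p**k * (1 - p)**(n - k)

def two_sided_p_value(heads, n):
    # Sum the probabilities of all outcomes no more likely than the
    # observed one (a common definition of the two-sided p-value).
    observed = binom_pmf(heads, n)
    return sum(binom_pmf(k, n) for k in range(n + 1)
               if binom_pmf(k, n) <= observed + 1e-12)

p = two_sided_p_value(heads=16, n=20)
print(round(p, 4))   # ~0.0118: weak support for the null
print(p < 0.05)      # True: reject the null at the 5% significance level
```

Rejecting the null here when the coin is in fact fair would be a Type I error; the 5% level caps the probability of that error.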
Statistics
–
Scatter plots are used in descriptive statistics to show the observed relationships between different variables.
Statistics
–
More probability density is found as one gets closer to the expected (mean) value in a normal distribution. Statistics used in standardized testing assessment are shown. The scales include standard deviations, cumulative percentages, percentile equivalents, Z-scores, T-scores, standard nines, and percentages in standard nines.
Statistics
–
Gerolamo Cardano, an early pioneer of the mathematics of probability.
Statistics
–
Karl Pearson, a founder of mathematical statistics.
10.
Machine learning
–
Machine learning is the subfield of computer science that, according to Arthur Samuel in 1959, gives "computers the ability to learn without being explicitly programmed". Machine learning is closely related to computational statistics, which also focuses on prediction-making through the use of computers. It has strong ties to mathematical optimization, which delivers methods, theory and application domains to the field. Machine learning is sometimes conflated with data mining, where the latter subfield focuses more on exploratory data analysis and is known as unsupervised learning. Machine learning can also be unsupervised and be used to learn and establish baseline behavioral profiles for various entities. Tom M. Mitchell provided a widely quoted, more formal definition of the field, and Alan Turing proposed that the question "Can machines think?" be replaced with the question "Can machines do what we can do?"; in the proposal he explores the various characteristics that could be possessed by a thinking machine. Machine learning tasks are typically classified into three broad categories, depending on the nature of the learning signal or feedback available to a learning system. These are: supervised learning, in which the computer is presented with example inputs and their desired outputs, given by a "teacher", and must learn a rule mapping inputs to outputs; unsupervised learning, in which no labels are given to the learning algorithm, leaving it to find structure in its input on its own (unsupervised learning can be a goal in itself or a means towards an end); and reinforcement learning, in which a computer program interacts with an environment in which it must perform a certain goal, and is provided feedback in terms of rewards and punishments as it navigates its problem space. Between supervised and unsupervised learning is semi-supervised learning, where the teacher gives an incomplete training signal: a training set with some of the target outputs missing. Transduction is a special case of this principle where the entire set of problem instances is known at learning time. Among other categories of machine learning problems, learning to learn learns its own inductive bias based on previous experience, and this is typically tackled in a supervised way.
Spam filtering is an example of classification, where the inputs are email messages and the classes are "spam" and "not spam". In regression, also a supervised problem, the outputs are continuous rather than discrete. In clustering, a set of inputs is to be divided into groups; unlike in classification, the groups are not known beforehand, making this typically an unsupervised task. Density estimation finds the distribution of inputs in some space, and dimensionality reduction simplifies inputs by mapping them into a lower-dimensional space. Topic modeling is a related problem, where a program is given a list of human-language documents and is tasked with finding out which documents cover similar topics. As a scientific endeavour, machine learning grew out of the quest for artificial intelligence; already in the early days of AI as an academic discipline, some researchers were interested in having machines learn from data.
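The supervised classification setting described above can be illustrated with a from-scratch sketch of a nearest-neighbour classifier; the 2-D points, labels, and function names below are invented for the example and are not from the source.

```python
# A 1-nearest-neighbour classifier: each unseen input is assigned the
# label of the closest "teacher"-labelled training example.
import math

train = [
    ((1.0, 1.0), "spam"),
    ((1.2, 0.8), "spam"),
    ((4.0, 4.2), "not spam"),
    ((4.5, 3.9), "not spam"),
]

def classify(point):
    # Pick the label of the training example nearest in Euclidean distance.
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    _, label = min(train, key=lambda example: dist(example[0], point))
    return label

print(classify((1.1, 0.9)))   # near the first cluster -> "spam"
print(classify((4.2, 4.0)))   # near the second cluster -> "not spam"
```

The same data with the labels removed would be a clustering problem: the algorithm would then have to discover the two groups itself, without a teacher.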
Machine learning
–
Machine learning and data mining
11.
Computer science
–
Computer science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems, and the discipline can be divided into a variety of theoretical and practical fields. Some fields, such as computational complexity theory, are highly abstract, while other fields focus on challenges in implementing computation. Human–computer interaction considers the challenges in making computers and computations useful and usable. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity; further, algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner; he may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. Charles Babbage started developing his programmable mechanical calculating machine in 1834, and in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched card system derived from the Jacquard loom, making it infinitely programmable. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; when the Harvard Mark I, a giant programmable calculator in Babbage's tradition, was finished decades later, some hailed it as "Babbage's dream come true".
During the 1940s, as new and more powerful computing machines were developed and it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as an academic discipline in the 1950s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge in 1953; the first computer science program in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own right, and it is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and later the IBM 709 computers. Still, working with these machines was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again. During the late 1950s, the computer science discipline was very much in its developmental stages. Time has since seen significant improvements in the usability and effectiveness of computing technology, and modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.
Computer science
–
Ada Lovelace is credited with writing the first algorithm intended for processing on a computer.
Computer science
–
The German military used the Enigma machine (shown here) during World War II for communications they wanted kept secret. The large-scale decryption of Enigma traffic at Bletchley Park was an important factor that contributed to Allied victory in WWII.
Computer science
–
Digital logic
12.
Game theory
–
Game theory is the study of mathematical models of conflict and cooperation between intelligent rational decision-makers. It is used in economics, political science, and psychology, as well as in logic and computer science. Originally, it addressed zero-sum games, in which one person's gains result in losses for the other participants. Today, game theory applies to a wide range of behavioral relations, and is now an umbrella term for the science of logical decision making in humans and animals. Modern game theory began with the idea of the existence of mixed-strategy equilibria in two-person zero-sum games, proved by John von Neumann. Von Neumann's original proof used the Brouwer fixed-point theorem on continuous mappings into compact convex sets; his paper was followed by the 1944 book Theory of Games and Economic Behavior, co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility. The theory was developed extensively in the 1950s by many scholars, and was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been recognized as an important tool in many fields: the Nobel Memorial Prize in Economic Sciences went to game theorist Jean Tirole in 2014, and John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology. Early discussions of examples of two-person games occurred long before the rise of modern mathematical game theory; the first known discussion occurred in a letter written in 1713 by Charles Waldegrave, an active Jacobite and uncle to James Waldegrave, a British diplomat. In this letter, Waldegrave provides a mixed-strategy solution to a two-person version of the card game le Her. James Madison made what we now recognize as a game-theoretic analysis of the ways states can be expected to behave under different systems of taxation.
In 1913 Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels, which proved that the optimal chess strategy is strictly determined. This paved the way for more general theorems; the Danish mathematician Zeuthen proved that a mathematical model had a winning strategy by using Brouwer's fixed point theorem. In his 1938 book Applications aux Jeux de Hasard and earlier notes, Borel conjectured the non-existence of mixed-strategy equilibria in two-person zero-sum games, a conjecture that was later proved false. Game theory did not really exist as a unique field until John von Neumann published a paper in 1928 proving the existence of such equilibria, a result elaborated in his 1944 book Theory of Games and Economic Behavior, co-authored with Oskar Morgenstern.
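The "strictly determined" property that Zermelo's theorem concerns can be made concrete with a short numerical sketch: a zero-sum game has an equilibrium in pure strategies exactly when the row player's maximin equals the column player's minimax, and when the two differ, mixed strategies (the subject of von Neumann's 1928 result) are needed. The payoff matrices below are illustrative examples, not taken from the text.

```python
# Maximin/minimax for a two-player zero-sum game given as a payoff
# matrix (payoffs to the row player). Example matrices are hypothetical.

def maximin(payoff):
    """Best payoff the row player can guarantee with a pure strategy."""
    return max(min(row) for row in payoff)

def minimax(payoff):
    """Lowest cap the column player can enforce with a pure strategy."""
    columns = list(zip(*payoff))
    return min(max(col) for col in columns)

saddle = [[3, 5], [1, 2]]      # has a saddle point: strictly determined
pennies = [[1, -1], [-1, 1]]   # matching pennies: no pure equilibrium

print(maximin(saddle), minimax(saddle))    # 3 3
print(maximin(pennies), minimax(pennies))  # -1 1
```

When the two values coincide (here, 3), that common value is the value of the game in pure strategies; when they differ, as in matching pennies, the game's value is only attained by randomizing.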
Game theory
–
An extensive form game
13.
Complex systems
–
Complex systems present problems both in mathematical modelling and philosophical foundations. The subject is also called complex systems theory, complexity science, the study of complex systems, complex networks, or network science. Such a systems approach is used in computer science, biology, economics, physics, chemistry, and architecture, and a variety of abstract theoretical complex systems is studied as a field of mathematics. The key problems of complex systems are difficulties with their formal modelling and simulation; from such a perspective, complex systems are defined in different research contexts on the basis of their different attributes. Since all complex systems have many interconnected components, the science of networks and network theory are important and useful tools for their study. A theory for the resilience of systems of systems, represented by a network of interdependent networks, was developed by Buldyrev et al. A consensus regarding a single universal definition of complex system does not yet exist. For systems that are less usefully represented with equations, various other kinds of narratives and methods are used. The study of complex system models is used for many scientific questions poorly suited to the traditional mechanistic conception provided by science. Linear systems represent the class of systems for which general techniques for stability control exist; however, many systems are inherently complex in terms of the definition above. This debate would notably lead economists, politicians and other parties to explore the question of computational complexity. Gregory Bateson played a key role in establishing the connection between anthropology and systems theory; he recognized that the interactive parts of cultures function much like ecosystems. The first research institute focused on complex systems, the Santa Fe Institute, was founded in 1984. Today, there are over 50 institutes and research centres focusing on complex systems.
The traditional approach to dealing with complexity is to reduce or constrain it. Typically, this involves compartmentalisation: dividing a large system into separate parts. Organizations, for instance, divide their work into departments that deal with separate issues, and engineering systems are designed using modular components. However, modular designs become susceptible to failure when issues arise that bridge the divisions. As projects and acquisitions become increasingly complex, companies and governments are challenged to find effective ways to manage mega-acquisitions such as the Army Future Combat Systems. Acquisitions such as the FCS rely on a web of interrelated parts which interact unpredictably. Over the last decades, within the emerging field of complexity economics, new predictive tools have been developed to explain economic growth.
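The cascading failures that make such webs of interrelated parts hard to manage can be illustrated with a toy dependency network, loosely in the spirit of the interdependent-networks model attributed above to Buldyrev et al. The graph, node names, and failure rule here are entirely made up for illustration.

```python
from collections import deque

def cascade(dependents, initial_failures):
    """Return the full set of failed nodes once the cascade settles.
    `dependents[n]` lists nodes that fail when n fails (toy model)."""
    failed = set(initial_failures)
    queue = deque(initial_failures)
    while queue:
        node = queue.popleft()
        for dep in dependents.get(node, []):
            if dep not in failed:
                failed.add(dep)
                queue.append(dep)
    return failed

# hypothetical coupling: power node P1 feeds comms node C1, which
# controls power node P2, which feeds comms node C2
deps = {"P1": ["C1"], "C1": ["P2"], "P2": ["C2"], "C2": []}
print(sorted(cascade(deps, ["P1"])))  # ['C1', 'C2', 'P1', 'P2']
```

A single local failure (P1) disables the whole chain, whereas the same failure starting at P2 stays contained: the point of the interdependent-networks literature is that coupling between networks can turn small faults into system-wide collapse.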
Complex systems
–
Complex systems
–
A Braitenberg simulation, programmed in breve, an artificial life simulator
Complex systems
–
A complex adaptive system model
Complex systems
–
This is a schematic representation of three types of mathematical models of complex systems with the level of their mechanistic understanding.
14.
Probability interpretations
–
The word probability has been used in a variety of ways since it was first applied to the mathematical study of games of chance. Does probability measure the real, physical tendency of something to occur, or is it a measure of how strongly one believes it will occur? In answering such questions, mathematicians interpret the probability values of probability theory. There are two broad categories of probability interpretations, which can be called physical and evidential probabilities. Physical probabilities, which are also called objective or frequency probabilities, are associated with random physical systems such as roulette wheels or rolling dice. In such systems, a given type of event tends to occur at a persistent rate, or relative frequency, and physical probabilities either explain, or are invoked to explain, these stable frequencies. The two main kinds of theory of physical probability are frequentist accounts and propensity accounts. On most accounts, evidential probabilities are considered to be degrees of belief; the four main evidential interpretations are the classical interpretation, the subjective interpretation, the epistemic or inductive interpretation, and the logical interpretation. There are also interpretations of probability covering groups, which are often labelled as intersubjective. Some interpretations of probability are associated with approaches to statistical inference, including theories of estimation. The physical interpretation, for example, is taken by followers of frequentist statistical methods, such as Ronald Fisher and Jerzy Neyman. This article, however, focuses on the interpretations of probability rather than theories of statistical inference. The terminology of this topic is rather confusing, in part because probabilities are studied within a variety of academic fields; the word frequentist is especially tricky. To philosophers it refers to a particular theory of physical probability.
To scientists, on the other hand, frequentist probability is just another name for physical probability, and those who promote Bayesian inference view frequentist statistics as an approach to inference that recognises only physical probabilities. It is unanimously agreed that statistics depends somehow on probability, but, as to what probability is and how it is connected with statistics, there has seldom been such complete disagreement and breakdown of communication since the Tower of Babel. Doubtless, much of the disagreement is merely terminological and would disappear under sufficiently sharp analysis. The philosophy of probability presents problems chiefly in matters of epistemology and the uneasy interface between mathematical concepts and ordinary language as it is used by non-mathematicians; probability theory itself is an established field of study in mathematics. The classical definition of probability, the first attempt at mathematical rigour in the field, developed from studies of games of chance: it states that probability is shared equally among all possible outcomes, provided these outcomes can be deemed equally likely.
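The classical definition just stated fits in a few lines of Python: probability is the ratio of favourable outcomes to all outcomes, on the assumption that every outcome is equally likely. The helper name and the dice example are illustrative choices, not from the text.

```python
from fractions import Fraction

def classical_probability(outcomes, event):
    """Classical interpretation: P(event) = favourable / total,
    assuming every outcome in `outcomes` is equally likely."""
    favourable = sum(1 for o in outcomes if event(o))
    return Fraction(favourable, len(outcomes))

# probability that two fair dice sum to 7: 6 favourable of 36 outcomes
outcomes = [(a, b) for a in range(1, 7) for b in range(1, 7)]
print(classical_probability(outcomes, lambda o: o[0] + o[1] == 7))  # 1/6
```

The definition's weakness, noted later in this article, is visible in the code: it works only when a finite, symmetric outcome set can be enumerated.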
Probability interpretations
–
The classical definition of probability works well for situations with only a finite number of equally-likely outcomes.
Probability interpretations
–
For frequentists, the probability of the ball landing in any pocket can be determined only by repeated trials in which the observed result converges to the underlying probability in the long run.
Probability interpretations
–
Gambling odds reflect the average bettor's 'degree of belief' in the outcome.
15.
Experiment
–
An experiment is a procedure carried out to support, refute, or validate a hypothesis. Experiments provide insight into cause and effect by demonstrating what outcome occurs when a particular factor is manipulated. Experiments vary greatly in goal and scale, but always rely on repeatable procedure and logical analysis of the results; there also exist natural experimental studies. A child may carry out basic experiments to understand gravity, while teams of scientists may take years of systematic investigation to advance their understanding of a phenomenon. Experiments and other types of activities are very important to student learning in the science classroom: experiments can raise test scores and help a student become more engaged and interested in the material they are learning. Experiments can vary from personal and informal natural comparisons to highly controlled ones, and uses of experiments vary considerably between the natural and human sciences. Experiments typically include controls, which are designed to minimize the effects of variables other than the single independent variable. This increases the reliability of the results, often through a comparison between control measurements and the other measurements; scientific controls are a part of the scientific method. Ideally, all variables in an experiment are controlled and none are uncontrolled. In such an experiment, if all controls work as expected, it is possible to conclude that the experiment works as intended and that results are due to the effect of the tested variable. In the scientific method, an experiment is a procedure that arbitrates between competing models or hypotheses. Researchers also use experimentation to test existing theories or new hypotheses in order to support or disprove them; an experiment usually tests a hypothesis, which is an expectation about how a particular process or phenomenon works.
However, an experiment may also aim to answer a question without a specific expectation about what the experiment will reveal. If an experiment is carefully conducted, the results usually either support or disprove the hypothesis. According to some philosophies of science, an experiment can never prove a hypothesis; on the other hand, an experiment that provides a counterexample can disprove a theory or hypothesis. An experiment must also control the possible confounding factors, that is, any factors that would mar the accuracy or repeatability of the experiment or the ability to interpret the results. Confounding is commonly eliminated through scientific controls and, in randomized experiments, through random assignment. In engineering and the physical sciences, experiments are a primary component of the scientific method: they are used to test theories and hypotheses about how physical processes work under particular conditions. Typically, experiments in these fields focus on replication of identical procedures in hopes of producing identical results in each replication. In medicine and the social sciences, by contrast, the prevalence of experimental research varies widely across disciplines, and in contrast to norms in the physical sciences, the focus is typically on the average treatment effect or another test statistic produced by the experiment.
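Random assignment and the average treatment effect mentioned above can be sketched with a tiny simulation. Everything here is fabricated for illustration: the outcome values, the assumed true effect of 2.0, and the function names; the point is only that randomly splitting subjects into treatment and control groups lets the difference in group means estimate the effect.

```python
import random

def randomize(subjects, rng):
    """Randomly split subjects into equal treatment and control groups."""
    shuffled = subjects[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def mean(xs):
    return sum(xs) / len(xs)

rng = random.Random(0)                            # fixed seed: reproducible
outcomes = {i: 10 + (i % 5) for i in range(20)}   # fabricated baselines
effect = 2.0                                      # assumed true effect

treatment, control = randomize(list(outcomes), rng)
treated_vals = [outcomes[i] + effect for i in treatment]  # receive treatment
control_vals = [outcomes[i] for i in control]             # do not

estimate = mean(treated_vals) - mean(control_vals)  # avg. treatment effect
print(round(estimate, 2))
```

With only 20 subjects the estimate is noisy (baseline imbalance between the random groups adds to or subtracts from the true effect); larger samples and repeated randomization shrink that error, which is exactly why controls and replication are stressed above.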
Experiment
–
Even very young children perform rudimentary experiments to learn about the world and how things work.
Experiment
–
Original map by John Snow showing the clusters of cholera cases in the London epidemic of 1854
16.
Frequentist probability
–
Frequentist probability or frequentism is an interpretation of probability: it defines an event's probability as the limit of its relative frequency in a large number of trials. This interpretation supports the statistical needs of experimental scientists and pollsters, though it does not support all needs; gamblers, for example, typically require estimates of the odds without experiments. The development of the frequentist account was motivated by the problems and paradoxes of the previously dominant viewpoint, the classical interpretation, which stumbled at any statistical problem that has no natural symmetry for reasoning. In the frequentist interpretation, probabilities are discussed only when dealing with well-defined random experiments. The set of all outcomes of a random experiment is called the sample space of the experiment, and an event is defined as a particular subset of the sample space. For any given event, only one of two possibilities may hold: it occurs or it does not. The relative frequency of occurrence of an event, observed in a number of repetitions of the experiment, is a measure of the probability of that event; this is the core conception of probability in the frequentist interpretation. As the number of trials is increased, one might expect the relative frequency to become a better approximation of a true frequency. The frequentist interpretation is an approach to the definition and use of probabilities; it does not claim to capture all connotations of the concept "probable" in the colloquial speech of natural languages. It offers distinct guidance in the construction and design of practical experiments, especially when contrasted with the Bayesian interpretation. Whether this guidance is useful, or is apt to mis-interpretation, has been a source of controversy, particularly when the frequency interpretation of probability is mistakenly assumed to be the only possible basis for frequentist inference.
So, for example, a list of mis-interpretations of the meaning of p-values accompanies the article on p-values, and controversies are detailed in the article on statistical hypothesis testing. The Jeffreys–Lindley paradox shows how different interpretations, applied to the same data set, can lead to different conclusions. As William Feller noted, "There is no place in our system for speculations concerning the probability that the sun will rise tomorrow. Before speaking of it we should have to agree on an idealized model which would presumably run along the lines 'out of infinitely many worlds one is selected at random.' Little imagination is required to construct such a model, but it appears both uninteresting and meaningless." Feller's comment was criticism of Laplace, who published a solution to the sunrise problem using an alternative probability interpretation, despite Laplace's explicit and immediate disclaimer in the source, based on expertise in astronomy as well as probability. Soon thereafter a flurry of nearly simultaneous publications by Mill, Ellis, Cournot and Fries introduced the frequentist view.
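The limiting-relative-frequency idea at the heart of the interpretation can be illustrated by simulation: as the number of flips of a fair coin grows, the observed frequency of heads settles near 1/2. The seed and trial counts below are arbitrary illustrative choices.

```python
import random

def relative_frequency(n_trials, rng):
    """Fraction of `n_trials` simulated fair-coin flips that land heads."""
    heads = sum(rng.random() < 0.5 for _ in range(n_trials))
    return heads / n_trials

rng = random.Random(42)  # fixed seed so the run is reproducible
for n in (10, 1_000, 100_000):
    print(n, relative_frequency(n, rng))
```

Small samples can wander far from 1/2, but the frequency for 100,000 trials is reliably within about a percent of it, which is the sense in which the frequentist defines the probability as the limit of this quantity.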
Frequentist probability
–
John Venn
17.
History of probability
–
While statistics deals with data and inferences from it, probability deals with the stochastic processes which lie behind data or outcomes. The mathematical sense of the term probability is from 1718; in the 18th century, the term chance was also used in the mathematical sense of probability. That word is ultimately from Latin cadentia, i.e., "a fall". Similarly, the derived noun likelihood had a meaning of "similarity, resemblance" but took on a meaning of probability from the mid 15th century. Ancient and medieval law of evidence developed a grading of degrees of proof, probabilities, and presumptions. Christiaan Huygens gave a comprehensive treatment of the subject. From Games, Gods and Gambling (ISBN 978-0-85264-171-2) by F. N. David: in ancient times there were games played using astragali, and the pottery of ancient Greece provides evidence that there was a circle drawn on the floor into which the astragali were tossed. In Egypt, excavators of tombs found a game they called Hounds and Jackals, and it seems that these were the early stages of the creation of dice. The first dice game mentioned in literature of the Christian era was called hazard, thought to have been brought to Europe by knights returning from the Crusades. A commentator of Dante puts further thought into this game: with 3 dice, the lowest number you can get is 3, while achieving a 4 can be done with 3 dice by having a two on one die and aces on the other two. Cardano also thought about the throwing of three dice. When 3 dice are thrown, there are the same number of unordered combinations summing to 9 as to 10, yet Cardano found that the probability of throwing a 9 is less than that of throwing a 10. He also demonstrated the efficacy of defining odds as the ratio of favourable to unfavourable outcomes. In addition, Galileo wrote about dice-throwing sometime between 1613 and 1623, essentially addressing Cardano's problem of why the probability of throwing a 9 is less than that of throwing a 10.
Galileo had the following to say: certain numbers have a greater ability to be thrown because there are more ways to create that number. Although 9 and 10 have the same number of unordered combinations, 10 is considered by dice players to be more common than 9. Jacob Bernoulli's Ars Conjectandi and Abraham de Moivre's The Doctrine of Chances put probability on a sound mathematical footing, showing how to calculate a wide range of complex probabilities. The power of probabilistic methods in dealing with uncertainty was shown by Gauss's determination of the orbit of Ceres from a few observations. The field of the history of probability itself was established by Isaac Todhunter's monumental A History of the Mathematical Theory of Probability from the Time of Pascal to that of Laplace. A hypothesis, for example that a drug is usually effective, gives rise to a probability distribution of observations; if observations approximately agree with the hypothesis, it is confirmed, and if not, the hypothesis is rejected.
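Cardano's and Galileo's dice question can be settled by brute-force enumeration of all 6^3 = 216 ordered outcomes: although 9 and 10 each have six unordered combinations, 10 can be thrown in more ordered ways, so it is more probable, just as the dice players observed.

```python
from itertools import product
from collections import Counter

# Enumerate every ordered roll of three fair dice and tally the sums.
counts = Counter(sum(roll) for roll in product(range(1, 7), repeat=3))

print(counts[9], counts[10])             # 25 27
print(counts[9] / 216, counts[10] / 216)  # the two probabilities
```

The discrepancy (25 vs. 27 ways) arises because an unordered combination like {3, 3, 3} corresponds to one ordered roll while {1, 4, 5} corresponds to six, so combinations are not equally likely; this is exactly the point Galileo made.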
History of probability
18.
Legal case
–
Legal Case was an Irish-bred, British-trained Thoroughbred racehorse and sire. He was never as good again, but did win the Premio Roma in 1990; after his retirement from racing he had some success as a breeding stallion in Brazil. Legal Case was a bay horse with no white markings bred in Ireland by Ovidstown Investments Ltd. He was sired by the dual Prix de l'Arc de Triomphe winner Alleged out of the mare Maryinsky. Alleged was a successful stallion and a strong influence for stamina; his best winners included Miss Alleged, Shantou and Law Society. Maryinsky won two races at Del Mar racetrack in 1980. Apart from Legal Case, Maryinsky also produced La Sky, the dam of the Epsom Oaks winner Love Divine, who in turn produced the St Leger winner Sixties Icon. During his racing career Legal Case was owned by the businessman Sir Gordon White. Legal Case was unraced as a two-year-old and did not appear on a racecourse until June 1989, when he contested a maiden race over eight and a half furlongs at Beverley Racecourse. A month later he was ridden by Frankie Dettori when he started at odds of 2/9 for a race at Windsor Racecourse. Cochrane regained the ride when Legal Case was moved up in class for the Listed Winter Hill Stakes at Windsor in August, in which he was matched against older horses for the first time. He started favourite but was beaten three lengths by the Michael Stoute-trained colt Dolpour, with Opening Verse finishing fifth of the seven runners. In September Legal Case was moved up to Group Three class for the Select Stakes over ten furlongs at Goodwood Racecourse; ridden by Dettori, he started the 7/4 favourite against four opponents. After being restrained in the early stages he took the lead a furlong out and drew away to win by four lengths from Greenwich Papillon, with Indian Queen three lengths back in third place.
The colt was moved up to the highest level when he was sent to France to contest the 68th running of the Prix de l'Arc de Triomphe over 2400 metres at Longchamp Racecourse on 8 October. Less than two weeks after his run at Longchamp, Legal Case, ridden by Cochrane, was one of eleven horses to contest the Champion Stakes over ten furlongs at Newmarket Racecourse. Dolpour was made favourite at 4/1, with Legal Case the 5/1 second choice in the betting alongside the four-year-old Ile de Chypre; the other contenders included the Dewhurst Stakes winner Scenic, the improving handicapper Braashee, the Royal Lodge Stakes winner High Estate and Ile de Nisky. Ile de Chypre led from the start, with Legal Case being restrained towards the rear of the field before making progress in the last quarter mile on the stands side. Inside the final furlong the three-year-olds Dolpour, Legal Case and Scenic moved up to challenge Ile de Chypre, although Scenic was squeezed for room; the final strides saw Dolpour, Ile de Chypre and Legal Case racing neck-and-neck before crossing the line together. After a photo finish, Legal Case was declared the winner by a head from Dolpour. In 1990 Dettori took over from Cochrane as Cumani's stable jockey.
Legal case
19.
Europe
–
Europe is a continent that comprises the westernmost part of Eurasia. It is bordered by the Arctic Ocean to the north and the Atlantic Ocean to the west, yet the non-oceanic borders of Europe, a concept dating back to classical antiquity, are somewhat arbitrary. Europe covers about 10,180,000 square kilometres, or 2% of the Earth's surface. Politically, Europe is divided into about fifty sovereign states, of which the Russian Federation is the largest and most populous, spanning 39% of the continent and comprising 15% of its population. Europe had a population of about 740 million as of 2015. Further from the sea, seasonal differences are more noticeable than close to the coast. Europe, in particular ancient Greece, was the birthplace of Western civilization, and the fall of the Western Roman Empire during the migration period marked the end of ancient history. Renaissance humanism, exploration, art, and science led to the modern era, and from the Age of Discovery onwards Europe played a predominant role in global affairs. Between the 16th and 20th centuries, European powers controlled at various times the Americas, most of Africa, and Oceania. The Industrial Revolution, which began in Great Britain at the end of the 18th century, gave rise to economic, cultural, and social change in Western Europe. During the Cold War, Europe was divided along the Iron Curtain between NATO in the west and the Warsaw Pact in the east, until the revolutions of 1989 and the fall of the Berlin Wall. In 1949, the Council of Europe was founded following a speech by Sir Winston Churchill; it includes all European states except for Belarus, Kazakhstan and Vatican City. Further European integration by some states led to the formation of the European Union; the EU originated in Western Europe but has been expanding eastward since the fall of the Soviet Union in 1991. The European anthem is "Ode to Joy", and member states celebrate peace and unity on Europe Day. In classical Greek mythology, Europa is the name of either a Phoenician princess or of a queen of Crete.
The name contains the elements εὐρύς (eurus), "wide, broad", and ὤψ (ōps), "eye, face"; "broad" has been an epithet of Earth herself in the reconstructed Proto-Indo-European religion and the poetry devoted to it. For the second element, compare the divine attributes of grey-eyed Athena or ox-eyed Hera. The same naming motive according to cartographic convention appears in Greek Ἀνατολή (Anatolia). Martin Litchfield West stated that, phonologically, the match between Europa's name and any form of the Semitic word is very poor. Next to these there is also a Proto-Indo-European root *h1regʷos, meaning "darkness". Most major world languages use words derived from Eurṓpē or Europa to refer to the continent, while in some Turkic languages the originally Persian name Frangistan is used casually in referring to much of Europe, besides official names such as Avrupa or Evropa.
Europe
–
Reconstruction of Herodotus ' world map
Europe
–
A medieval T and O map from 1472 showing the three continents as domains of the sons of Noah — Asia to Sem (Shem), Europe to Iafeth (Japheth), and Africa to Cham (Ham)
Europe
–
Early modern depiction of Europa regina ('Queen Europe') and the mythical Europa of the 8th century BC.
20.
Nobility
–
The privileges associated with nobility may constitute substantial advantages over or relative to non-nobles, or may be largely honorary, and vary from country to country and era to era. There is often a variety of ranks within the noble class, and nobility still exists in a few states, e.g. San Marino and the Vatican City in Europe. Hereditary titles often distinguish nobles from non-nobles, although in many nations most of the nobility have been untitled, and some countries have had non-hereditary nobility, such as the Empire of Brazil. The term derives from Latin nobilitas, the abstract noun of the adjective nobilis. In modern usage, nobility is applied to the highest social class in pre-modern societies. It rapidly came to be seen as a hereditary caste, sometimes associated with a right to bear a hereditary title and, for example in pre-revolutionary France, enjoying fiscal and other privileges. Nobility is a historical, social and often legal notion, differing from high socio-economic status in that the latter is based on income; being wealthy or influential cannot, ipso facto, make one noble. Various republics, including former Iron Curtain countries, Greece, Mexico, and Austria, have expressly abolished the conferral and use of titles of nobility for their citizens. Not all of the benefits of nobility derived from noble status per se; usually privileges were granted or recognised by the monarch in association with possession of a specific title, office or estate. Most nobles' wealth derived from one or more estates, large or small, which also included infrastructure such as a castle, well and mill to which local peasants were allowed some access, although often at a price. Nobles were expected to live "nobly", that is, from the proceeds of these possessions; work involving manual labour or subordination to those of lower rank was either forbidden or frowned upon socially. In some countries, the local lord could impose restrictions on a commoner's movements.
Nobles exclusively enjoyed the privilege of hunting, and in France nobles were exempt from paying the taille, the major direct tax. In some parts of Europe the right of private war long remained the privilege of every noble. During the early Renaissance, duelling established the status of a respectable gentleman. Nobility came to be associated with social rather than legal privilege, expressed in a general expectation of deference from those of lower rank; by the 21st century even that deference had become increasingly minimised. In France, a seigneurie might include one or more manors surrounded by land and villages subject to a noble's prerogatives and disposition. Seigneuries could be bought, sold or mortgaged; if erected by the crown into, e.g., a barony or countship, a seigneurie became legally entailed for a specific family, which could use it as their title. Yet most French nobles were untitled. In other parts of Europe, sovereign rulers arrogated to themselves the exclusive prerogative to act as fons honorum within their realms. Nobility might be inherited or conferred by a fons honorum.
Nobility
–
Detail from Très Riches Heures du Duc de Berry (The Very Rich Hours of the Duke of Berry), c. 1410, month of April
Nobility
–
Nobility offered protection in exchange for service
Nobility
–
French aristocrats, c. 1774
Nobility
–
A French political cartoon of the three orders of feudal society (1789). The rural third estate carries the clergy and the nobility.
21.
Pierre de Fermat
–
He made notable contributions to analytic geometry, probability, and optics. He is best known for Fermat's principle for light propagation and Fermat's Last Theorem in number theory. Fermat was born in the first decade of the 17th century in Beaumont-de-Lomagne, France; the late-15th-century mansion where he was born is now a museum. He was from Gascony, where his father, Dominique Fermat, was a leather merchant. Pierre had one brother and two sisters and was almost certainly brought up in the town of his birth. There is little evidence concerning his school education, but it was probably at the Collège de Navarre in Montauban. He attended the University of Orléans from 1623 and received a bachelor's degree in law in 1626. In Bordeaux he began his first serious mathematical researches, and in 1629 he gave a copy of his restoration of Apollonius's De Locis Planis to one of the mathematicians there; in Bordeaux he became much influenced by the work of François Viète. In 1630, he bought the office of a councillor at the Parlement de Toulouse, one of the High Courts of Judicature in France, and he held this office for the rest of his life. Fermat thereby became entitled to change his name from Pierre Fermat to Pierre de Fermat. Fluent in six languages, Fermat was praised for his written verse in several of them, and his advice was eagerly sought regarding the emendation of Greek texts. He communicated most of his work in letters to friends, often with little or no proof of his theorems; in some of these letters he explored many of the fundamental ideas of calculus before Newton or Leibniz. Fermat was a trained lawyer, making mathematics more of a hobby than a profession; nevertheless, he made important contributions to analytical geometry, probability, number theory and calculus. Secrecy was common in European mathematical circles at the time, and this naturally led to priority disputes with contemporaries such as Descartes and Wallis.
Anders Hald writes that "the basis of Fermat's mathematics was the classical Greek treatises combined with Vieta's new algebraic methods". Fermat's pioneering work in analytic geometry was circulated in manuscript form in 1636, predating the publication of Descartes's famous La géométrie; the manuscript was published posthumously in 1679 in Varia opera mathematica. In these works, Fermat obtained a technique for finding the centers of gravity of various plane and solid figures, which led to his further work in quadrature. Fermat was the first person known to have evaluated the integral of general power functions. With his method, he was able to reduce this evaluation to the sum of a geometric series; the resulting formula was helpful to Newton, and then Leibniz, when they independently developed the fundamental theorem of calculus. In number theory, Fermat studied Pell's equation, perfect numbers and amicable numbers, and it was while researching perfect numbers that he discovered Fermat's little theorem. He developed the two-square theorem and the polygonal number theorem. Although Fermat claimed to have proved all his arithmetic theorems, few records of his proofs have survived.
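Fermat's little theorem states that if p is prime and a is not divisible by p, then a^(p−1) ≡ 1 (mod p). A minimal illustrative sketch in Python (the helper name is hypothetical, not from any source):

```python
def fermat_little_theorem_holds(a: int, p: int) -> bool:
    """Check a**(p-1) ≡ 1 (mod p) using fast modular exponentiation."""
    return pow(a, p - 1, p) == 1

# For a prime modulus, the congruence holds for every base not divisible by p.
assert all(fermat_little_theorem_holds(a, 101) for a in range(1, 101))

# The converse can fail: 341 = 11 * 31 is composite yet passes for base 2,
# which is why such "pseudoprimes" matter in primality testing.
assert fermat_little_theorem_holds(2, 341)
```

The three-argument form of Python's built-in `pow` performs the modular exponentiation efficiently, which is what makes this check practical even for large exponents.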
Pierre de Fermat
–
Pierre de Fermat
Pierre de Fermat
–
Bust in the Salle des Illustres in Capitole de Toulouse
Pierre de Fermat
–
Place of burial of Pierre de Fermat in Place Jean Jaurès, Castres. Translation of the plaque: "In this place was buried on January 13, 1665, Pierre de Fermat, councillor of the chamber of Édit [Parlement of Toulouse] and mathematician of great renown, celebrated for his theorem aⁿ + bⁿ ≠ cⁿ for n > 2."
Pierre de Fermat
–
Holographic will handwritten by Fermat on 4 March 1660 — kept at the Departmental Archives of Haute-Garonne, in Toulouse
22.
Blaise Pascal
–
Blaise Pascal was a French mathematician, physicist, inventor, writer and Christian philosopher. He was a prodigy who was educated by his father, and he also wrote in defence of the scientific method. In 1642, while still a teenager, he started some pioneering work on calculating machines; after three years of effort and 50 prototypes, he built 20 finished machines over the following 10 years. Following Galileo Galilei and Torricelli, in 1647 he rebutted Aristotle's followers who insisted that nature abhors a vacuum, and Pascal's results caused many disputes before being accepted. In 1646, he and his sister Jacqueline identified with the religious movement within Catholicism known by its detractors as Jansenism. Following a religious experience in late 1654, he began writing works on philosophy. His two most famous works date from this period, the Lettres provinciales and the Pensées, the former set in the conflict between Jansenists and Jesuits. In that year, he also wrote an important treatise on the arithmetical triangle, and between 1658 and 1659 he wrote on the cycloid and its use in calculating the volume of solids. Pascal had poor health, especially after the age of 18, and he died just two months after his 39th birthday. Pascal was born in Clermont-Ferrand, in France's Auvergne region, and he lost his mother, Antoinette Begon, at the age of three. His father, Étienne Pascal, who also had an interest in science and mathematics, was a local judge. Pascal had two sisters, the younger Jacqueline and the elder Gilberte. In 1631, five years after the death of his wife, Étienne moved with his children to Paris; the newly arrived family soon hired Louise Delfault, a maid who eventually became an instrumental member of the household. Étienne, who never remarried, decided that he alone would educate his children, for they all showed extraordinary intellectual ability; the young Pascal showed an amazing aptitude for mathematics and science.
Of particular interest to Pascal was a work of Desargues on conic sections; following Desargues's thinking, the teenaged Pascal produced a treatise on conics whose central result, now known as Pascal's theorem, states that if a hexagon is inscribed in a circle then the three intersection points of opposite sides lie on a line. Pascal's work was so precocious that Descartes was convinced that Pascal's father had written it. In France at that time offices and positions could be, and were, bought and sold. In 1631 Étienne sold his position as president of the Cour des Aides for 65,665 livres. The money was invested in a government bond which provided, if not a lavish, then certainly a comfortable income, which allowed the Pascal family to move to Paris. But in 1638 Richelieu, desperate for money to carry on the Thirty Years' War, defaulted on the government's bonds. Suddenly Étienne Pascal's worth had dropped from nearly 66,000 livres to less than 7,300, and it was only when Jacqueline performed well in a children's play with Richelieu in attendance that Étienne was pardoned.
Blaise Pascal
–
Painting of Blaise Pascal made by François II Quesnel for Gérard Edelinck in 1691.
Blaise Pascal
–
An early Pascaline on display at the Musée des Arts et Métiers, Paris
Blaise Pascal
–
Portrait of Pascal
Blaise Pascal
–
Pascal studying the cycloid, by Augustin Pajou, 1785, Louvre
23.
Jakob Bernoulli
–
Jacob Bernoulli was one of the many prominent mathematicians in the Bernoulli family. He was a proponent of Leibnizian calculus and sided with Leibniz during the Leibniz–Newton calculus controversy. He is known for his numerous contributions to calculus and, along with his brother Johann, was one of the founders of the calculus of variations; he also discovered the fundamental mathematical constant e. However, his most important contribution was in the field of probability. Jacob Bernoulli was born in Basel, Switzerland. Following his father's wish, he studied theology and entered the ministry, but contrary to the desires of his parents, he also studied mathematics and astronomy. He traveled throughout Europe from 1676 to 1682, learning about the latest discoveries in mathematics; this included the work of Johannes Hudde, Robert Boyle, and Robert Hooke. During this time he produced an incorrect theory of comets. Bernoulli returned to Switzerland and began teaching mechanics at the University of Basel from 1683. In 1684 he married Judith Stupanus, and they had two children. During this decade, he began a fertile research career, and his travels allowed him to establish correspondence with many leading mathematicians and scientists of his era. During this time, he studied the new discoveries in mathematics, including Christiaan Huygens's De ratiociniis in aleae ludo, Descartes's Géométrie and Frans van Schooten's supplements to it. He also studied Isaac Barrow and John Wallis, leading to his interest in infinitesimal geometry. Apart from these, it was between 1684 and 1689 that many of the results that were to make up Ars Conjectandi were discovered. He was appointed professor of mathematics at the University of Basel in 1687; by that time, he had begun tutoring his brother Johann Bernoulli on mathematical topics.
The two brothers began to study the calculus as presented by Leibniz in his 1684 paper Nova Methodus pro Maximis et Minimis, published in Acta Eruditorum. They also studied the publications of von Tschirnhaus. It must be understood that Leibniz's publications on the calculus were very obscure to mathematicians of that time, and the Bernoullis were the first to try to understand and apply Leibniz's theories. Jacob collaborated with his brother on various applications of calculus, but by 1697 the relationship had completely broken down. His grave is in Basel Münster (Cathedral), where the gravestone shown below is located; the lunar crater Bernoulli is also named after him, jointly with his brother Johann. Jacob Bernoulli's first important contributions were a pamphlet on the parallels of logic and algebra published in 1685 and work on probability in 1685, while his geometry result gave a construction to divide any triangle into four equal parts with two perpendicular lines. By 1689 he had published important work on series and published his law of large numbers in probability theory.
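Bernoulli's law of large numbers says that the empirical frequency of successes in repeated independent trials converges to the underlying probability. A small illustrative simulation (the helper name, seed and parameters are arbitrary choices, not from the source):

```python
import random

def sample_mean(p: float, n: int, rng: random.Random) -> float:
    """Mean of n independent Bernoulli(p) trials."""
    return sum(rng.random() < p for _ in range(n)) / n

rng = random.Random(0)
# As the number of trials grows, the empirical frequency approaches p = 0.3;
# with 100,000 trials the deviation is almost surely well under 0.01.
small = sample_mean(0.3, 100, rng)
large = sample_mean(0.3, 100_000, rng)
assert abs(large - 0.3) < 0.01
```

The standard deviation of the sample mean shrinks like 1/√n, which is why the tolerance in the assertion is safe for n = 100,000 even though a run of only 100 trials can miss 0.3 by several percent.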
Jakob Bernoulli
–
Jakob Bernoulli
Jakob Bernoulli
–
Jacob Bernoulli's grave.
24.
Abraham de Moivre
–
Abraham de Moivre was a French mathematician known for de Moivre's formula, which links complex numbers and trigonometry, and for his work on the normal distribution and probability theory. He was a friend of Isaac Newton and Edmond Halley, and even though he faced religious persecution he remained a steadfast Christian throughout his life. Among his fellow Huguenot exiles in England, he was a colleague of the editor and translator Pierre des Maizeaux. De Moivre wrote a book on probability theory, The Doctrine of Chances, said to have been prized by gamblers. He first discovered Binet's formula, the closed-form expression for Fibonacci numbers linking the nth power of the golden ratio φ to the nth Fibonacci number, and he was the first to postulate the central limit theorem. Abraham de Moivre was born in Vitry-le-François in Champagne on May 26, 1667. His father, Daniel de Moivre, was a surgeon who believed in the value of education. Though de Moivre's parents were Protestant, he first attended the Christian Brothers' Catholic school in Vitry, which was unusually tolerant given religious tensions in France at the time. When he was eleven, his parents sent him to the Protestant Academy at Sedan, which had been founded in 1579 at the initiative of Françoise de Bourbon, the widow of Henri-Robert de la Marck. In 1682 the Protestant Academy at Sedan was suppressed, and de Moivre enrolled to study logic at Saumur for two years. In 1684 he moved to Paris to study physics, and for the first time had formal mathematics training, with private lessons from Jacques Ozanam. In 1685 the Edict of Fontainebleau revoked the Edict of Nantes; it forbade Protestant worship and required that all children be baptized by Catholic priests. De Moivre was sent to the Prieuré de Saint-Martin, a school to which the authorities sent Protestant children for indoctrination into Catholicism. By the time he arrived in London, de Moivre was a competent mathematician with a good knowledge of many of the standard texts.
To make a living, de Moivre became a tutor of mathematics. He continued his studies of mathematics after visiting the Earl of Devonshire and seeing Newton's recent book; looking through it, he realized that it was far deeper than the books he had studied previously, and he became determined to read and understand it. By 1692, de Moivre had become friends with Edmond Halley and soon after with Isaac Newton himself. In 1695, Halley communicated de Moivre's first mathematics paper, which arose from his study of fluxions in the Principia Mathematica, to the Royal Society; the paper was published in the Philosophical Transactions that same year. Shortly after publishing this paper, de Moivre generalized Newton's noteworthy binomial theorem into the multinomial theorem, and the Royal Society became apprised of this method in 1697. After de Moivre had been accepted, Halley encouraged him to turn his attention to astronomy. The mathematician Johann Bernoulli proved de Moivre's formula in 1710. Despite his accomplishments, de Moivre never secured a university appointment; at least a part of the reason was a bias against his French origins. In November 1697 he was elected a Fellow of the Royal Society, and in 1712 he was appointed to a commission set up by the society, alongside Arbuthnot, Hill, Halley, Jones, Machin, Burnet, Robarts, Bonet and Aston. The full details of the controversy can be found in the Leibniz and Newton calculus controversy article.
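Binet's formula, which de Moivre discovered first, gives the nth Fibonacci number in closed form as F(n) = (φⁿ − ψⁿ)/√5, where φ = (1 + √5)/2 is the golden ratio and ψ = (1 − √5)/2. A minimal sketch (the function name is hypothetical):

```python
import math

def fibonacci_binet(n: int) -> int:
    """Binet's formula: F(n) = (phi**n - psi**n) / sqrt(5)."""
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2
    psi = (1 - sqrt5) / 2
    # Rounding absorbs floating-point error; exact for moderate n.
    return round((phi**n - psi**n) / sqrt5)

# Matches the recurrence F(n) = F(n-1) + F(n-2).
fibs = [0, 1]
for _ in range(20):
    fibs.append(fibs[-1] + fibs[-2])
assert [fibonacci_binet(n) for n in range(22)] == fibs
```

Because |ψ| < 1, the ψⁿ term shrinks rapidly, so for larger n the formula is essentially φⁿ/√5 rounded to the nearest integer; double-precision floats stay exact only up to roughly n = 70.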
Abraham de Moivre
–
Abraham de Moivre
Abraham de Moivre
–
The Doctrine of Chances, 1761
25.
Daniel Bernoulli
–
Daniel Bernoulli FRS was a Swiss mathematician and physicist and one of the many prominent mathematicians in the Bernoulli family. He is particularly remembered for his applications of mathematics to mechanics, especially fluid mechanics. Daniel Bernoulli was born in Groningen, in the Netherlands, into a family of distinguished mathematicians. The Bernoulli family came originally from Antwerp, at that time in the Spanish Netherlands; after a brief period in Frankfurt the family moved to Basel. Daniel was the son of Johann Bernoulli and the nephew of Jacob Bernoulli, and he had two brothers, Niklaus and Johann II. Daniel Bernoulli was described by W. W. Rouse Ball as "by far the ablest of the younger Bernoullis". He is said to have had a bad relationship with his father: Johann Bernoulli plagiarized some key ideas from Daniel's book Hydrodynamica in his own book Hydraulica, which he backdated to before Hydrodynamica, and despite Daniel's attempts at reconciliation, his father carried the grudge until his death. Around schooling age, his father, Johann, encouraged him to study business, there being poor rewards awaiting a mathematician. However, Daniel refused, because he wanted to study mathematics; he later gave in to his father's wish and studied business. His father then asked him to study medicine, and Daniel agreed on the condition that his father would teach him mathematics privately. Daniel studied medicine at Basel, Heidelberg, and Strasbourg, and earned a PhD in anatomy and botany in 1721. He was a contemporary and close friend of Leonhard Euler. He went to St. Petersburg in 1724 as professor of mathematics, but was very unhappy there, and a temporary illness in 1733 gave him an excuse for leaving St. Petersburg. He returned to the University of Basel, where he successively held the chairs of medicine, metaphysics, and natural philosophy. In May 1750 he was elected a Fellow of the Royal Society. His earliest mathematical work was the Exercitationes, published in 1724 with the help of Goldbach.
Two years later he pointed out for the first time the frequent desirability of resolving a compound motion into motions of translation and rotation. Together, Bernoulli and Euler tried to discover more about the flow of fluids; in particular, they wanted to know about the relationship between the speed at which blood flows and its pressure. Soon physicians all over Europe were measuring patients' blood pressure by sticking point-ended glass tubes directly into their arteries. It was not until about 170 years later, in 1896, that an Italian doctor discovered a less painful method which is still in use today. However, Bernoulli's method of measuring pressure is used today in modern aircraft to measure the speed of the air passing the plane. Taking his discoveries further, Daniel Bernoulli returned to his work on the conservation of energy. It was known that a moving body exchanges its kinetic energy for potential energy when it gains height.
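The aircraft airspeed measurement alluded to here rests on Bernoulli's principle: for incompressible flow, the dynamic pressure q sensed by a pitot tube satisfies q = ρv²/2, so v = √(2q/ρ). A simplified sketch with illustrative numbers (the pressure and density values are arbitrary examples, not from the source):

```python
import math

def airspeed_from_dynamic_pressure(delta_p: float, rho: float) -> float:
    """Incompressible Bernoulli relation for a pitot tube:
    dynamic pressure q = rho * v**2 / 2, hence v = sqrt(2 * q / rho)."""
    return math.sqrt(2.0 * delta_p / rho)

# Illustrative numbers: 1860 Pa of dynamic pressure in sea-level air
# (rho ≈ 1.225 kg/m^3) corresponds to roughly 55 m/s.
v = airspeed_from_dynamic_pressure(1860.0, 1.225)
```

Real airspeed indicators apply compressibility and instrument corrections on top of this, but the incompressible relation above is the core of the idea.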
Daniel Bernoulli
–
Daniel Bernoulli
26.
Adrien-Marie Legendre
–
Adrien-Marie Legendre was a French mathematician. Legendre made numerous contributions to mathematics; well-known and important concepts such as the Legendre polynomials and the Legendre transformation are named after him. Adrien-Marie Legendre was born in Paris on 18 September 1752 to a wealthy family. He received his education at the Collège Mazarin in Paris and defended his thesis in physics and mathematics in 1770. He taught at the École Militaire in Paris from 1775 to 1780, and at the same time he was associated with the Bureau des Longitudes. In 1782, the Berlin Academy awarded Legendre a prize for his treatise on projectiles in resistant media; this treatise also brought him to the attention of Lagrange. The Académie des Sciences made Legendre an adjoint member in 1783, and in 1789 he was elected a Fellow of the Royal Society. He assisted with the Anglo-French Survey to calculate the distance between the Paris Observatory and the Royal Greenwich Observatory by means of trigonometry; to this end, in 1787 he visited Dover and London together with Dominique, comte de Cassini, and the three also visited William Herschel, the discoverer of the planet Uranus. Legendre lost his fortune in 1793 during the French Revolution. That year, he also married Marguerite-Claudine Couhin, who helped him put his affairs in order. In 1795 Legendre became one of six members of the mathematics section of the reconstituted Académie des Sciences, renamed the Institut National des Sciences et des Arts, and in 1803 Napoleon reorganized the Institut National. His pension was later stopped and then partially reinstated with the change in government in 1828. In 1831 he was made an officer of the Légion d'Honneur. Legendre died in Paris on 10 January 1833, after a long and painful illness, and his widow carefully preserved his belongings to memorialize him.
Upon her death in 1856, she was buried next to her husband in the village of Auteuil, where the couple had lived. Legendre's name is one of the 72 names inscribed on the Eiffel Tower. Today, the term "least squares method" is used as a translation from the French méthode des moindres carrés. Around 1811 he named the gamma function and introduced the symbol Γ, normalizing it so that Γ(n + 1) = n!. In 1830 he gave a proof of Fermat's Last Theorem for exponent n = 5, which was also proven by Lejeune Dirichlet in 1828. In number theory, he conjectured the quadratic reciprocity law, subsequently proved by Gauss; in connection with this, he also did pioneering work on the distribution of primes and on the application of analysis to number theory. His 1798 conjecture of the prime number theorem was proved by Hadamard and de la Vallée Poussin in 1896. He is known for the Legendre transformation, which is used to go from the Lagrangian to the Hamiltonian formulation of classical mechanics; in thermodynamics it is also used to obtain the enthalpy and the Helmholtz and Gibbs energies from the internal energy.
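For fitting a straight line y = a + bx, the method of least squares that Legendre published reduces to a pair of closed-form normal equations. A minimal sketch of that special case (not Legendre's own presentation, and the helper name is hypothetical):

```python
def fit_line_least_squares(xs, ys):
    """Ordinary least squares for y = a + b*x via the normal equations:
    b = S_xy / S_xx, a = mean(y) - b * mean(x)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = mean_y - b * mean_x
    return a, b

# Data lying exactly on a line is recovered exactly: y = 1 + 2x.
a, b = fit_line_least_squares([0, 1, 2, 3], [1, 3, 5, 7])
assert abs(a - 1) < 1e-12 and abs(b - 2) < 1e-12
```

The same idea generalizes to any number of parameters by solving a small linear system; the one-variable case shown here keeps the arithmetic visible.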
Adrien-Marie Legendre
–
1820 watercolor caricature of Adrien-Marie Legendre by French artist Julien-Leopold Boilly (see portrait debacle), the only existing portrait known
Adrien-Marie Legendre
–
1820 watercolor caricatures of the French mathematicians Adrien-Marie Legendre (left) and Joseph Fourier (right) by French artist Julien-Leopold Boilly, watercolor portrait numbers 29 and 30 of Album de 73 portraits-charge aquarellés des membres de l'Institut.
Adrien-Marie Legendre
–
Side-view sketch of French politician Louis Legendre (1752–1797), whose portrait was mistakenly used for nearly 200 years to represent the mathematician Adrien-Marie Legendre, until the mistake was discovered in 2005.
27.
Robert Adrain
–
Robert Adrain was an Irish mathematician whose career was spent in the United States. He was considered one of the most brilliant mathematical minds of the time in America, and he is chiefly remembered for his formulation of the method of least squares. He was born in Carrickfergus, County Antrim, Ireland, but left Ireland after being wounded in the uprising of the United Irishmen in 1798 and moved to Princeton. He taught mathematics at various schools in the United States and was president of the York County Academy in York, Pennsylvania, from 1801 to 1805. Adrain certainly did not know of the work of C. F. Gauss on least squares, although it is possible that he had read A. M. Legendre's. Adrain was an editor of and contributor to the Mathematical Correspondent, the first mathematical journal in the United States, and he was elected a Fellow of the American Academy of Arts and Sciences. In 1825 he founded a somewhat more successful publication targeting a wider readership, The Mathematical Diary, which was published through 1832. Adrain was the father of Congressman Garnett B. Adrain. Robert Adrain died in New Brunswick, New Jersey. He is commemorated by a plaque unveiled at Carrickfergus by the Ulster History Circle.
Robert Adrain
–
Robert Adrain
28.
Friedrich Bessel
–
Friedrich Wilhelm Bessel was a German astronomer, mathematician, physicist and geodesist. He was the first astronomer to determine reliable values for the distance from the Sun to another star, by the method of parallax. A special class of mathematical functions was named Bessel functions after his death, though they had originally been discovered by Daniel Bernoulli. Bessel was born into a large family in Minden, Westphalia, the administrative center of Minden-Ravensberg. At the age of 14 he was apprenticed to the import-export concern Kulenkamp at Bremen; the business's reliance on cargo ships led him to turn his mathematical skills to problems in navigation, which in turn led to an interest in astronomy as a way of determining longitude. Two years later Bessel left Kulenkamp and became Johann Hieronymus Schröter's assistant at Lilienthal Observatory near Bremen. There he worked on James Bradley's stellar observations to produce precise positions for some 3,222 stars. In January 1810, at the age of 25, Bessel was appointed director of the newly founded Königsberg Observatory by King Frederick William III of Prussia. On the recommendation of the mathematician and physicist Carl Friedrich Gauss he was awarded a doctorate from the University of Göttingen in March 1811. Around that time, the two men engaged in an epistolary correspondence; however, when they met in person in 1825, they quarrelled, and the details are not known. The physicist Franz Ernst Neumann, Bessel's close companion and colleague, was married to Florentine, the sister of Bessel's wife Johanna Hagen. Neumann introduced Bessel's exacting methods of measurement and data reduction into his mathematico-physical seminar, which he co-directed with Carl Gustav Jacob Jacobi at Königsberg, and these exacting methods had an impact upon the work of Neumann's students. Bessel had two sons and three daughters; his eldest daughter, Marie, married Georg Adolf Erman, a member of the scholarly Erman family.
One of their sons was the renowned Egyptologist Adolf Erman. After several months of illness, Bessel died in March 1846 at his observatory from retroperitoneal fibrosis. While the observatory was still under construction Bessel elaborated the Fundamenta Astronomiae based on Bradley's observations; as a preliminary result he produced tables of atmospheric refraction that won him the Lalande Prize from the French Academy of Sciences in 1811. The Königsberg Observatory began operation in 1813. Starting in 1819, Bessel determined the position of over 50,000 stars using a meridian circle from Reichenbach, assisted by some of his qualified students, the most prominent of them being Friedrich Wilhelm Argelander. With this work done, Bessel was able to achieve the feat for which he is best remembered today: he is credited with being the first to use parallax in calculating the distance to a star. In 1838 Bessel won the race, announcing that 61 Cygni had a parallax of 0.314 arcseconds; given the current measurement of 11.4 light years, Bessel's figure had an error of 9.6%. Nearly at the same time, Friedrich Georg Wilhelm Struve and Thomas Henderson measured the parallaxes of Vega and Alpha Centauri. Bessel's announcement of Sirius's dark companion in 1844 was the first correct claim of a previously unobserved companion by positional measurement, and it eventually led to the discovery of Sirius B. In 1824, Bessel developed a new method for calculating the circumstances of eclipses using the so-called Besselian elements; his method simplified the calculation to such an extent, without sacrificing accuracy, that it is still in use today.
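The parallax method converts an annual parallax of p arcseconds into a distance of 1/p parsecs, where one parsec is about 3.26 light years. A quick arithmetic check of Bessel's 61 Cygni figure (the function name is hypothetical):

```python
LY_PER_PARSEC = 3.2616  # approximate conversion factor

def distance_light_years(parallax_arcsec: float) -> float:
    """Annual parallax p (in arcseconds) gives a distance of 1/p parsecs."""
    return (1.0 / parallax_arcsec) * LY_PER_PARSEC

# Bessel's 1838 value of 0.314" implies roughly 10.4 light years,
# against the modern figure of about 11.4 for 61 Cygni.
d = distance_light_years(0.314)
```

A parallax of exactly 1 arcsecond is the definition of one parsec, so `distance_light_years(1.0)` returns the conversion factor itself.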
Friedrich Bessel
–
C. A. Jensen, Friedrich Wilhelm Bessel, 1839 (Ny Carlsberg Glyptotek)
29.
Giovanni Schiaparelli
–
Giovanni Virginio Schiaparelli was an Italian astronomer and science historian. He was educated at the University of Turin and later studied at Berlin Observatory; in 1859–1860 he worked at Pulkovo Observatory near St Petersburg, and then worked for over forty years at Brera Observatory in Milan. Among Schiaparelli's contributions are his telescopic observations of Mars; in his initial observations, he named the seas and continents of Mars. He also recorded linear features he called canali, a word famously rendered in English as "canals": while that term indicates an artificial construction, the term "channels" connotes natural configurations of the planetary surface. Later, thanks to the observations of the Italian astronomer Vincenzo Cerulli, the canals were recognized as an optical illusion. Schiaparelli also studied meteors: he proved, for example, that the orbit of the Leonid meteor shower coincided with that of the comet Tempel-Tuttle, and these observations led him to formulate the hypothesis, subsequently proved to be correct, that meteor showers are streams of debris left behind by comets. He was also a keen observer of the inner planets Mercury and Venus; he made several drawings and determined their rotation periods, though in 1965 it was shown that his and most other subsequent measurements of Mercury's period were incorrect. Schiaparelli was a scholar of the history of classical astronomy. His honors include the Lalande Prize, the Gold Medal of the Royal Astronomical Society and the Bruce Medal. Named after him are the main-belt asteroid 4062 Schiaparelli (named on 15 September 1989), the lunar crater Schiaparelli, the Martian crater Schiaparelli, Schiaparelli Dorsum on Mercury, and the 2016 ExoMars Schiaparelli lander. His niece, Elsa Schiaparelli, became a noted designer of haute couture. His works include Le stelle cadenti (1873), La vita sul pianeta Marte (1893) and Scritti sulla storia della astronomia antica (1925, in three volumes).
Giovanni Schiaparelli
–
Giovanni Schiaparelli
Giovanni Schiaparelli
–
Schiaparelli's grave at the Monumental Cemetery of Milan, Italy
30.
Christian August Friedrich Peters
–
Christian August Friedrich Peters was a German astronomer, the father of the astronomer Carl Friedrich Wilhelm Peters. He was born in Hamburg and died in Kiel. Peters was the son of a merchant and, although he did not attend school regularly, he obtained a good knowledge of mathematics. In 1826 he became assistant to Heinrich Christian Schumacher at Altona Observatory; Schumacher encouraged him to study astronomy, and Peters did a PhD under Friedrich Bessel at the University of Königsberg. In 1834 he became an assistant at Hamburg Observatory and in 1839 joined the staff of Pulkovo Observatory. In 1849 he became professor of astronomy at Königsberg and soon after succeeded Friedrich Wilhelm Bessel as director of the observatory there. In 1854 he became director of Altona Observatory and editor of the Astronomische Nachrichten; Peters edited the journal for the rest of his life, being responsible for 58 volumes. In 1872 the observatory moved to Kiel and he moved there. In 1866, he was elected a foreign member of the Royal Swedish Academy of Sciences. Peters became a familiar name in the literature on the theory of errors for his 1856 note on the estimation of precision using absolute deviations from the mean, and he won the Gold Medal of the Royal Astronomical Society in 1852. His works include Numerus constans nutationis ex ascensionibus rectis stellae polaris in specula Dorpatensi annis 1822 ad 1838 observatis deductus; Resultate aus Beobachtungen des Polarsterns am Ertelschen Vertikalkreise (1842); Recherches sur la parallaxe des étoiles fixes; Über die eigene Bewegung des Sirius (this work led to the discovery of Sirius's companion); and Über die Bestimmung des wahrscheinlichen Fehlers einer Beobachtung aus den Abweichungen der Beobachtungen von ihrem arithmetischen Mittel, Astronomische Nachrichten, 44.
The articles Peters published in Astronomische Nachrichten are all available online.
Christian August Friedrich Peters
–
Christian August Friedrich Peters
31.
Sylvestre Lacroix
–
Sylvestre François Lacroix was a French mathematician. He was born in Paris and raised in a family of modest means who still managed to obtain a good education for their son. Lacroix's path to mathematics started with the novel Robinson Crusoe, which gave him an interest in sailing and thus in navigation too; at that point geometry captured his interest, and the rest of mathematics followed. He took courses with Antoine-René Mauduit at the Collège Royal de France and with Joseph-François Marie at the Collège Mazarin of the University of Paris. In 1779 he obtained some lunar observations of Pierre Charles Le Monnier, and the next year he followed some lectures of Gaspard Monge. In 1782, at the age of 17, he became an instructor in mathematics at the École des Gardes de la Marine in Rochefort; Monge was the students' examiner and Lacroix's supervisor there until 1795. Returning to Paris, Condorcet hired Lacroix to fill in for him as instructor of gentlemen at a Paris lycée. In 1787 he began to teach at the École Royale Militaire de Paris, and he married Marie Nicole Sophie Arcambal. From 1788, in Besançon, he taught courses at the École Royale d'Artillerie under the examiner Pierre-Simon Laplace; the posting in Besançon lasted until 1793, when Lacroix returned to Paris. It was the best of times and the worst of times: Lavoisier had opened inquiry into the new chemistry, and Lacroix joined the Société Philomatique de Paris, which provided a journal in which to communicate his findings; on the other hand, Paris was in the grip of the Reign of Terror. In 1794 Lacroix became director of the Executive Committee for Public Instruction, and in this position he promoted the École Normale and the system of Écoles Centrales. In 1795 he taught at the École Centrale des Quatre-Nations. The first volume of his Traité du Calcul Différentiel et du Calcul Intégral was published in 1797; Legendre predicted that it would "make itself conspicuous by the choice of methods, their generality, and the rigor of the demonstrations".
In hindsight Ivor Grattan-Guinness observed, "The Traité is by far the most comprehensive work of its kind for that time." In 1799, Lacroix became professor of analysis at the École Polytechnique. He was the author of at least 17 biographies contributed to the Biographie Universelle compiled by Louis Gabriel Michaud. In 1809, he was admitted to the Faculté des Sciences de Paris, and in 1812 he began teaching at the Collège de France. In later editions, the structure of the Traité was changed somewhat, especially the third volume on series and differences, but the general impression is still that the streams and directions of the calculus had been amplified and enriched. During his career, he produced a number of important textbooks in mathematics; translations of these books into English were used in British universities, and the books remained in circulation for nearly 50 years. In 1812, Babbage set up the Analytical Society for the translation of Lacroix's work on the differential and integral calculus. Lacroix crater on the Moon was named for him.
Sylvestre Lacroix
–
Sylvestre François Lacroix
32.
Adolphe Quetelet
–
Lambert Adolphe Jacques Quetelet ForMemRS was a Belgian astronomer, mathematician, statistician and sociologist. He founded and directed the Brussels Observatory and was influential in introducing statistical methods to the social sciences; his name is sometimes spelled with an accent, as Quételet. He also developed the body mass index scale. Adolphe was born in Ghent, the son of François-Augustin-Jacques-Henri Quetelet, a Frenchman, and Anne Françoise Vandervelde, a Flemish woman. His father, François, was born at Ham, Picardy; in the course of his employment he traveled with his employer on the Continent, particularly spending time in Italy, and at about 31 he settled in Ghent, where he was employed by the city. François died when Adolphe was only seven years old. Adolphe studied at the Ghent lycée, where he started teaching mathematics in 1815 at the age of 19. In 1819 he moved to the Athenaeum in Brussels, and in the same year he completed his dissertation, receiving a doctorate in mathematics from the University of Ghent. Shortly thereafter, the young man set out to convince government officials and private donors to build an astronomical observatory in Brussels; he succeeded in 1828. He became a member of the Royal Academy in 1820, and he lectured at the museum for sciences and letters and at the Belgian Military School. In 1825 he became a correspondent of the Royal Institute of the Netherlands; from 1841 to 1851 he was a supernumerary associate of the Institute, and when it became the Royal Netherlands Academy of Arts and Sciences he became a foreign member. In 1850, he was elected a member of the Royal Swedish Academy of Sciences. Quetelet also founded several statistical journals and societies, and was interested in creating international cooperation among statisticians. In 1855 Quetelet suffered from apoplexy, which diminished but did not end his scientific activity. He died in Brussels on 17 February 1874 and is buried in the Brussels Cemetery.
His scientific research encompassed a range of disciplines: meteorology, astronomy, mathematics, statistics, demography, sociology and criminology. He made significant contributions to the development of these fields, and he also wrote several monographs directed at the general public. Quetelet was a liberal and an anticlerical, but not an atheist, materialist or socialist. The new science of probability and statistics was mainly used in astronomy at the time, where it was essential to account for measurement errors around means; this was done using the method of least squares. Quetelet was among the first to apply statistics to social science, planning what he called social physics. He was keenly aware of the complexity of social phenomena. His goal was to understand the statistical laws underlying phenomena such as crime rates, and he wanted to explain the values of these variables by other social factors
Adolphe Quetelet
–
Adolphe Quetelet
33.
Richard Dedekind
–
Julius Wilhelm Richard Dedekind was a German mathematician who made important contributions to abstract algebra, algebraic number theory and the definition of the real numbers. Dedekind's father was Julius Levin Ulrich Dedekind, an administrator of the Collegium Carolinum in Braunschweig; as an adult, Richard never used the names Julius Wilhelm. He was born, lived most of his life, and died in Braunschweig. He first attended the Collegium Carolinum in 1848 before transferring to the University of Göttingen in 1850. There, Dedekind was taught number theory by professor Moritz Stern. Gauss was still teaching, although mostly at an elementary level, and Dedekind became his last student. Dedekind received his doctorate in 1852 for a thesis titled Über die Theorie der Eulerschen Integrale; this thesis did not display the talent evident in Dedekind's subsequent publications. At that time, the University of Berlin, not Göttingen, was the main facility for mathematical research in Germany. Thus Dedekind went to Berlin for two years of study, where he and Bernhard Riemann were contemporaries; they were awarded the habilitation in 1854. Dedekind returned to Göttingen to teach as a Privatdozent, giving courses on probability and geometry. He studied for a while with Peter Gustav Lejeune Dirichlet, and they became good friends. Because of lingering weaknesses in his knowledge, he studied elliptic and abelian functions. Yet he was also the first at Göttingen to lecture on Galois theory; about this time, he became one of the first people to understand the importance of the notion of groups for algebra and arithmetic. In 1858 he began teaching at the Polytechnic school in Zürich. When the Collegium Carolinum was upgraded to a Technische Hochschule in 1862, Dedekind returned to his native Braunschweig, where he spent the rest of his life, teaching at the Institute. He retired in 1894, but did occasional teaching and continued to publish. He never married, instead living with his sister Julia. 
Dedekind was elected to the Academies of Berlin and Rome, and he received honorary doctorates from the universities of Oslo, Zurich, and Braunschweig. While teaching calculus for the first time at the Polytechnic school, Dedekind developed the notion now known as a Dedekind cut. The idea of a cut is that an irrational number divides the rational numbers into two classes, with all the numbers of one class being strictly greater than all the numbers of the other class. Every location on the number line continuum contains either a rational or an irrational number; thus there are no empty locations, gaps, or discontinuities. Dedekind published his thoughts on irrational numbers and Dedekind cuts in his pamphlet Stetigkeit und irrationale Zahlen (in modern terminology, Vollständigkeit, or completeness). Dedekind also proposed that if there exists a one-to-one correspondence between two sets, then the two sets are similar. Thus the set N of natural numbers can be shown to be similar to the subset of N whose members are the squares of every member of N
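The cut construction can be illustrated in code. This sketch (with hypothetical names) represents the cut defining √2 by a membership test on the rationals and checks, on a sample, that every rational in the lower class lies below every rational in the upper class:

```python
from fractions import Fraction

def in_lower_class(q: Fraction) -> bool:
    """Lower class of the Dedekind cut for sqrt(2): all rationals that
    are negative or whose square is less than 2."""
    return q < 0 or q * q < 2

# Sample the rationals k/10 for -3 <= k/10 <= 3 and split them by the cut.
sample = [Fraction(k, 10) for k in range(-30, 31)]
lower = [q for q in sample if in_lower_class(q)]
upper = [q for q in sample if not in_lower_class(q)]

# Every rational falls into exactly one class, and the classes do not interleave.
assert max(lower) < min(upper)
```
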
Richard Dedekind
–
Richard Dedekind
Richard Dedekind
–
East German stamp from 1981, commemorating Richard Dedekind.
34.
Hermann Laurent
–
Paul Matthieu Hermann Laurent was a French mathematician. Despite his large body of work, the Laurent series expansion for complex functions is not named after him; it honors Pierre Alphonse Laurent. His works include Sur les principes fondamentaux de la théorie des nombres et de la géométrie
Hermann Laurent
–
Paul Matthieu Hermann Laurent
35.
Andrey Markov
–
Andrey Andreyevich Markov was a Russian mathematician, best known for his work on stochastic processes; a primary subject of his research later became known as Markov chains and Markov processes. Markov and his younger brother Vladimir Andreevich Markov proved the Markov brothers' inequality. His son, another Andrey Andreyevich Markov, was also a notable mathematician, making contributions to constructive mathematics and recursive function theory. Andrey Markov was born on 14 June 1856 in Russia. He attended the Petersburg Grammar school, where a select few teachers regarded him as a rebellious student, and he performed poorly in most subjects other than mathematics. Later he attended Petersburg University and was lectured by Pafnuty Chebyshev; among his teachers were Yulian Sokhotski, Konstantin Posse, Yegor Zolotarev, Pafnuty Chebyshev, Aleksandr Korkin, Mikhail Okatov and Osip Somov. He completed his studies at the university and was later asked whether he would like to stay and pursue a career as a mathematician. He taught at schools and continued his own mathematical studies, and in this time he found a use for his mathematical skills: he figured out that he could use chains to model the alternation of vowels and consonants in literary texts. He also contributed to many other areas of mathematics in his time, and died at age 66 on 20 July 1922. After passing the candidate's examinations, he had remained at the university to prepare for a lecturer's position. In April 1880, Markov defended his master's thesis About Binary Quadratic Forms with Positive Determinant; five years later, in January 1885, there followed his doctoral thesis About Some Applications of Algebraic Continuous Fractions. His pedagogical work began after the defense of his master's thesis in autumn 1880; as a privatdozent he lectured on differential and integral calculus. 
Later he lectured alternately on introduction to analysis and probability theory, and from 1895 through 1905 he also lectured in differential calculus. One year after the defense of his doctoral thesis, Markov was appointed extraordinary professor. In 1890, after the death of Viktor Bunyakovsky, Markov became a member of the academy; his promotion to professor of St. Petersburg University followed in the fall of 1894. In 1896, Markov was elected a member of the academy as the successor of Chebyshev. In 1905 he was appointed merited professor and was granted the right to retire; until 1910, however, he continued to lecture in the calculus of differences
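Markov's vowel-consonant study can be sketched as a two-state Markov chain whose transition probabilities are estimated from letter counts. The function and the sample sentence below are illustrative, not Markov's actual data:

```python
def transition_probabilities(text: str) -> dict:
    """Estimate P(next letter class | current letter class), where each
    letter is classed as a vowel ('V') or a consonant ('C')."""
    vowels = set("aeiou")
    classes = ["V" if ch in vowels else "C" for ch in text.lower() if ch.isalpha()]
    counts = {pair: 0 for pair in (("V", "V"), ("V", "C"), ("C", "V"), ("C", "C"))}
    for a, b in zip(classes, classes[1:]):
        counts[(a, b)] += 1
    probs = {}
    for a in "VC":
        row_total = counts[(a, "V")] + counts[(a, "C")]
        for b in "VC":
            probs[(a, b)] = counts[(a, b)] / row_total if row_total else 0.0
    return probs

p = transition_probabilities("onegin is a novel in verse")
# Each row of the estimated transition matrix sums to one.
assert abs(p[("V", "V")] + p[("V", "C")] - 1.0) < 1e-9
```
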
Andrey Markov
–
Andrey (Andrei) Andreyevich Markov
Andrey Markov
–
Markov in 1886
Andrey Markov
–
Markov's headstone
36.
Stochastic process
–
In probability theory and related fields, a stochastic or random process is a mathematical object usually defined as a collection of random variables. Stochastic processes are used as mathematical models of systems and phenomena that appear to vary in a random manner. Furthermore, seemingly random changes in financial markets have motivated the use of stochastic processes in finance. Applications and the study of phenomena have in turn inspired the proposal of new stochastic processes. Examples of such processes include the Wiener process or Brownian motion process, used by Louis Bachelier to study price changes on the Paris Bourse, and the Poisson process, used by A. K. Erlang to study the number of phone calls occurring in a certain period of time. The term random function is also used to refer to a stochastic or random process. The terms stochastic process and random process are used interchangeably, often with no specific mathematical space given for the set that indexes the random variables, but the two terms are most often used when the random variables are indexed by the integers or an interval of the real line. If the random variables are indexed by the Cartesian plane or some higher-dimensional Euclidean space, the collection is usually called a random field instead. The values of a stochastic process are not always numbers and can be vectors or other mathematical objects. The theory of stochastic processes is considered to be an important contribution to mathematics. The set used to index the random variables is called the index set; historically, the index set was some subset of the real line, such as the natural numbers, giving the index set the interpretation of time. Each random variable in the collection takes values from the same space, known as the state space. This state space can be, for example, the integers. An increment is the amount that a stochastic process changes between two index values, often interpreted as two points in time. A stochastic process can have many outcomes, due to its randomness, and a single outcome of a stochastic process is called, among other names, a sample function or realization. 
A stochastic process can be classified in different ways, for example, by its state space or its index set. One common way of classification is by the cardinality of the index set: if the index set consists of integers, time is said to be discrete, while if the index set is some interval of the real line, time is said to be continuous. The two types of processes are respectively referred to as discrete-time and continuous-time stochastic processes
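These definitions can be illustrated by a simple symmetric random walk: the index set is {0, 1, …, n} (discrete time), the state space is the integers, and each increment is ±1. A minimal sketch, with an arbitrary function name and seed:

```python
import random

def random_walk(steps: int, seed: int = 0) -> list:
    """Simple symmetric random walk: X_0 = 0, and each increment
    between consecutive index values is +1 or -1 with equal probability."""
    rng = random.Random(seed)
    path = [0]
    for _ in range(steps):
        path.append(path[-1] + rng.choice((-1, 1)))
    return path

walk = random_walk(100)
# Every increment of this discrete-time process is +1 or -1.
assert all(abs(b - a) == 1 for a, b in zip(walk, walk[1:]))
```
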
Stochastic process
–
Stock market fluctuations have been modeled by stochastic processes.
37.
Andrey Kolmogorov
–
Andrey Kolmogorov was born in Tambov, about 500 kilometers south-southeast of Moscow, in 1903. His mother, Maria Yakovlevna Kolmogorova, died giving birth to him, and Andrey was raised by two of his aunts in Tunoshna at the estate of his grandfather, a well-to-do nobleman. Little is known about Andrey's father; he was supposedly named Nikolai Matveevich Kataev and had been an agronomist. Nikolai had been exiled from St. Petersburg to the Yaroslavl province after his participation in the movement against the czars. He disappeared in 1919 and was presumed to have been killed in the Russian Civil War. Andrey Kolmogorov was educated in his aunt Vera's village school, and his earliest literary and mathematical efforts appeared in the school journal, of whose mathematical section Andrey was the editor. In 1910, his aunt adopted him, and they moved to Moscow. In 1920, Kolmogorov began to study at Moscow State University and, at the same time, at the Mendeleev Moscow Institute of Chemistry and Technology. Kolmogorov writes about this time: I arrived at Moscow University with a knowledge of mathematics. I knew in particular the beginning of set theory. I studied many questions in articles in the Encyclopedia of Brockhaus and Efron, filling out for myself what was presented too concisely in these articles. Kolmogorov gained a reputation for his wide-ranging erudition. During the same period, Kolmogorov worked out and proved several results in set theory and in the theory of Fourier series. In 1922, Kolmogorov gained international recognition for constructing a Fourier series that diverges almost everywhere; around this time, he decided to devote his life to mathematics. In 1925, Kolmogorov graduated from Moscow State University and began to study under the supervision of Nikolai Luzin, and he became interested in probability theory. 
In 1929, Kolmogorov earned his Doctor of Philosophy degree from Moscow State University. In 1930, Kolmogorov went on his first long trip abroad, traveling to Göttingen and Munich, and then to Paris; he had various contacts in Göttingen. His pioneering work, About the Analytical Methods of Probability Theory, was published in 1931, and in the same year he became a professor at Moscow State University. In 1935, Kolmogorov became the first chairman of the department of probability theory at Moscow State University. Around the same years Kolmogorov contributed to the field of ecology and generalized the Lotka–Volterra model of predator-prey systems. In 1936, Kolmogorov and Alexandrov were involved in the persecution of their common teacher Nikolai Luzin, in the so-called Luzin affair. In a 1938 paper, Kolmogorov established the basic theorems for smoothing and predicting stationary stochastic processes, a paper that had military applications during the Cold War
Andrey Kolmogorov
–
Andrey Kolmogorov
Andrey Kolmogorov
–
Kolmogorov (left) delivers a talk at a Soviet information theory symposium. (Tallinn, 1973).
Andrey Kolmogorov
–
Kolmogorov works on his talk (Tallinn, 1973).
38.
Artemas Martin
–
Artemas Martin was a self-educated American mathematician. Martin was born on August 3, 1835 in Steuben County, New York, and grew up in Venango County, Pennsylvania. He worked as a farmer, oil driller, and schoolteacher. In 1881, he declined an invitation to become a professor of mathematics at the Normal School in Missouri. In 1885, he became the librarian for the Survey Office of the United States Coast and Geodetic Survey, and in 1898 he became a computer in the Division of Tides. He died on November 7, 1918. From 1870 to 1875, he was editor of the Stairway Department of Clark's School Visitor, one of the magazines to which he had previously contributed. From 1875 to 1876 Martin moved to the Normal Monthly, where he published 16 articles on diophantine analysis, and he subsequently became editor of the Mathematical Visitor in 1877 and of the Mathematical Magazine in 1882. In 1893 in Chicago, his paper On fifth-power numbers whose sum is a power was read at the International Mathematical Congress held in connection with the World's Columbian Exposition. Martin maintained an extensive library, now in the collections of American University. In 1877 Martin was given an honorary M.A. from Yale University; in 1882 he was awarded an honorary Ph.D. from Rutgers University, and he later received a third honorary degree. He was also a member of the American Mathematical Society, the Circolo Matematico di Palermo, the Mathematical Association of England, and the Deutsche Mathematiker-Vereinigung
Artemas Martin
–
Artemas Martin (US Naval Observatory)
39.
Kolmogorov
–
40.
Financial regulation
–
Financial regulation is a form of regulation or supervision which subjects financial institutions to certain requirements, restrictions and guidelines; it may be handled by either a government or a non-government organization. Financial regulation has also influenced the structure of banking sectors by decreasing borrowing costs. Its aims include the reduction of financial crime – reducing the extent to which it is possible for a regulated business to be used for a purpose connected with financial crime – and regulating foreign participation in the financial markets. Acts empower organizations, government or non-government, to monitor activities and enforce actions. There are various setups and combinations in place for the regulatory structure around the globe. Exchange acts ensure that trading on the exchanges is conducted in a proper manner, most prominently the pricing process, execution and settlement of trades, and direct and efficient trade monitoring. Financial regulators ensure that companies and market participants comply with various regulations under the trading acts. The trading acts demand that listed companies publish regular financial reports, whereas market participants are required to publish major shareholder notifications. Asset management supervision or investment acts ensure the frictionless operation of those vehicles. Banking acts lay down rules for banks which they have to observe when they are being established and when they are carrying on their business. These rules are designed to prevent unwelcome developments that might disrupt the functioning of the banking system, thus ensuring a strong and efficient banking system. The following is a short listing of regulatory authorities in various jurisdictions; for a more complete listing, please see the list of financial regulatory authorities by country. Sometimes more than one institution regulates and supervises the banking market. Apart from national regulatory authorities, the Eurozone countries are forming a Single Supervisory Mechanism under the European Central Bank as a prelude to banking union. 
There are also associations of financial regulatory authorities
Financial regulation
41.
Behavioral finance
–
Risk tolerance is a crucial factor in personal financial decision making. Risk tolerance is defined as an individual's willingness to engage in a financial activity whose outcome is uncertain. Behavioral economics is primarily concerned with the bounds of rationality of economic agents. Behavioral models typically integrate insights from psychology, neuroscience and microeconomic theory; in so doing, these behavioral models cover a range of concepts and methods. The study of behavioral economics includes how market decisions are made and the mechanisms that drive public choice. The use of the term behavioral economics in U.S. scholarly papers has increased in the past few years. There are three prevalent themes in behavioral finance: Heuristics – people often make decisions based on approximate rules of thumb and not strict logic. Framing – the collection of anecdotes and stereotypes that make up the mental and emotional filters individuals rely on to understand and respond to events. Market inefficiencies – these include mis-pricings and non-rational decision making. During the classical period of economics, microeconomics was closely linked to psychology, and economists developed the concept of homo economicus, whose psychology was fundamentally rational. However, many important neo-classical economists employed more sophisticated psychological explanations, including Francis Edgeworth and Vilfredo Pareto. Economic psychology emerged in the 20th century in the works of Gabriel Tarde, George Katona, and Laszlo Garai. Expected utility and discounted utility models began to gain acceptance, generating testable hypotheses about decision-making given uncertainty and intertemporal consumption, and in the 1960s cognitive psychology began to shed more light on the brain as an information processing device. In mathematical psychology, there is a longstanding interest in the transitivity of preference. Prospect theory has two stages: an editing stage and an evaluation stage. 
In the editing stage, risky situations are simplified using various heuristics of choice. In the evaluation stage, outcomes are compared to a reference point and classified as gains if greater than the reference point and losses if less than the reference point. Loss aversion means that losses bite more than equivalent gains; in their 1979 paper published in Econometrica, Kahneman and Tversky found the median coefficient of loss aversion to be about 2.25, i.e., losses bite about 2.25 times more than equivalent gains. Prospect theory is able to explain everything that the two main existing decision theories (expected utility theory and rank-dependent utility theory) can explain, and it has been used to explain a range of phenomena that existing decision theories have great difficulty explaining. These include backward-bending labor supply curves, asymmetric price elasticities, tax evasion, and co-movement of stock prices and consumption. In 1992, in the Journal of Risk and Uncertainty, Kahneman and Tversky gave a revised account of prospect theory that they called cumulative prospect theory. The new theory eliminated the editing phase and focused just on the evaluation phase; its main feature was that it allowed for non-linear probability weighting in a cumulative manner, which was originally suggested in John Quiggin's rank-dependent utility theory. Psychological traits such as overconfidence, projection bias, and the effects of limited attention are now part of the theory. Behavioral economics has also been applied to intertemporal choice, that is, making a decision whose effects occur at a different time. Hyperbolic discounting describes the tendency to discount outcomes in the near future more than outcomes in the far future. 
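The evaluation-stage value function can be sketched as follows. The loss-aversion coefficient 2.25 and the curvature parameter 0.88 are Kahneman and Tversky's estimates, but the function name and the sample amounts are illustrative:

```python
def prospect_value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    """Prospect-theory value of an outcome x relative to the reference
    point: concave for gains, convex and steeper (by lam) for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# A loss bites about 2.25 times more than an equivalent gain.
assert abs(prospect_value(-100.0)) > prospect_value(100.0)
```
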
Other branches of behavioral economics enrich the model of the utility function without implying inconsistency in preferences. Ernst Fehr, Armin Falk, and Matthew Rabin studied fairness, inequity aversion, and reciprocal altruism, weakening the neoclassical assumption of perfect selfishness. This work is particularly applicable to wage setting. Behavioral economics caught on among the general public with the success of books such as Dan Ariely's Predictably Irrational
Behavioral finance
–
Daniel Kahneman, winner of 2002 Nobel prize in economics
Behavioral finance
–
World GDP (PPP) per capita by country (2014)
42.
Natural language processing
–
The history of NLP generally starts in the 1950s, although work can be found from earlier periods. In 1950, Alan Turing published an article titled Computing Machinery and Intelligence, which proposed what is now called the Turing test as a criterion of intelligence. The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian sentences into English. The authors claimed that within three or five years machine translation would be a solved problem; in fact, little further research in machine translation was conducted until the late 1980s. Using almost no information about human thought or emotion, ELIZA sometimes provided a startlingly human-like interaction; when the patient exceeded the very small knowledge base, ELIZA might provide a generic response, for example, responding to My head hurts with Why do you say your head hurts. During the 1970s many programmers began to write conceptual ontologies, which structured real-world information into computer-understandable data; examples are MARGIE, SAM, PAM, TaleSpin, QUALM, Politics, and Plot Units. During this time, many chatterbots were written, including PARRY and Racter. Up to the 1980s, most NLP systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in NLP with the introduction of machine learning algorithms for language processing. Some of the earliest-used machine learning algorithms, such as decision trees, produced systems of hard if-then rules similar to existing hand-written rules. The cache language models upon which many speech recognition systems now rely are examples of statistical models. Many of the early successes occurred in the field of machine translation, due especially to work at IBM Research. However, most other systems depended on corpora specifically developed for the tasks implemented by these systems; as a result, a great deal of research has gone into methods of more effectively learning from limited amounts of data. Recent research has focused on unsupervised and semi-supervised learning algorithms. 
Such algorithms are able to learn from data that has not been hand-annotated with the desired answers. Generally, this task is much more difficult than supervised learning, and typically produces less accurate results for a given amount of input data; however, there is an enormous amount of non-annotated data available. Since the so-called statistical revolution in the late 1980s and mid-1990s, much natural language processing research has relied heavily on machine learning. Formerly, many language-processing tasks typically involved the direct hand coding of rules, which is not in general robust to natural language variation. The machine-learning paradigm calls instead for using statistical inference to automatically learn such rules through the analysis of large corpora of typical real-world examples. Many different classes of machine learning algorithms have been applied to NLP tasks. These algorithms take as input a set of features that are generated from the input data. Some of the algorithms, such as decision trees, produced systems of hard if-then rules similar to the systems of hand-written rules that were then common
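The shift from hand-written rules to statistical models can be illustrated with a toy bigram language model that estimates P(next word | current word) from corpus counts. The corpus and names below are illustrative:

```python
from collections import Counter

def bigram_model(words: list):
    """Return a function prob(current, nxt) estimating
    P(next word = nxt | current word = current) from bigram counts."""
    pair_counts = Counter(zip(words, words[1:]))
    context_counts = Counter(words[:-1])

    def prob(current: str, nxt: str) -> float:
        total = context_counts[current]
        return pair_counts[(current, nxt)] / total if total else 0.0

    return prob

prob = bigram_model("the cat sat on the mat".split())
# "the" is followed once by "cat" and once by "mat" in the toy corpus.
assert prob("the", "cat") == 0.5
```
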
Natural language processing
–
An automated online assistant providing customer service on a web page, an example of an application where natural language processing is a major component.
43.
Power set
–
In mathematics, the power set of any set S is the set of all subsets of S, including the empty set and S itself. The power set of S is variously denoted as P(S), 𝒫(S), ℘(S), ℙ(S), or 2S. In axiomatic set theory, the existence of the power set of any set is postulated by the axiom of power set. Any subset of P(S) is called a family of sets over S. If S is the set {x, y, z}, then its subsets are the empty set, {x}, {y}, {z}, {x, y}, {x, z}, {y, z}, and {x, y, z}, and hence the power set of S is the set containing these eight subsets. If S is a finite set with |S| = n elements, then the power set of S contains 2n elements. This fact, which is the motivation for the notation 2S, may be demonstrated simply as follows. First, we write any subset of S in the format (γ1, γ2, …, γn), where each γi, 1 ≤ i ≤ n, can take the value 0 or 1. If γi = 1, the i-th element of S is in the subset; otherwise it is not. Clearly the number of subsets that can be constructed this way is 2n, as each γi ∈ {0, 1}. Cantor's diagonal argument shows that the power set of a set always has strictly higher cardinality than the set itself; in particular, Cantor's theorem shows that the power set of a countably infinite set is uncountably infinite. The power set of the set of natural numbers can be put in a one-to-one correspondence with the set of real numbers. The power set of a set S, together with the operations of union, intersection and complement, forms a Boolean algebra; in fact, one can show that any finite Boolean algebra is isomorphic to the Boolean algebra of the power set of a finite set. For infinite Boolean algebras this is no longer true, but every infinite Boolean algebra can be represented as a subalgebra of a power set Boolean algebra. The power set of a set S forms a group when considered with the operation of symmetric difference, and a monoid when considered with the operation of intersection. It can hence be shown that the power set, considered together with both of these operations, forms a Boolean ring. In set theory, XY is the set of all functions from Y to X. As 2 can be defined as {0, 1}, 2S is the set of all functions from S to {0, 1}. 
Hence 2S and P(S) can be considered identical set-theoretically. This notion can be applied to the example above, in which S = {x, y, z}, to see the isomorphism with the binary numbers from 0 to 2n − 1, with n being the number of elements in the set: in a subset's binary representation, a 1 in the position corresponding to an element's location in the set indicates the presence of that element. The number of subsets with k elements in the power set of a set with n elements is given by the number of combinations, C(n, k)
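The bit-vector argument above translates directly into code: each integer from 0 to 2n − 1 encodes one subset, its i-th bit playing the role of γi. A minimal sketch:

```python
def power_set(s: list) -> list:
    """Enumerate all 2**n subsets of s; bit i of each mask says whether
    the i-th element belongs to the subset (the gamma_i above)."""
    n = len(s)
    return [[s[i] for i in range(n) if mask >> i & 1] for mask in range(2 ** n)]

subsets = power_set(["x", "y", "z"])
assert len(subsets) == 2 ** 3  # |P(S)| = 2**n
```
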
Power set
–
The elements of the power set of the set { x, y, z } ordered in respect to inclusion.
44.
Joint distribution
–
In the case of only two random variables, this is called a bivariate distribution, but the concept generalizes to any number of random variables, giving a multivariate distribution. The joint probability distribution can be expressed either in terms of a joint cumulative distribution function or in terms of a joint probability density function or joint probability mass function. Consider the flip of two coins; let A and B be discrete random variables associated with the outcomes of the first and second coin flips respectively. If a coin displays heads then the associated random variable is 1, and 0 otherwise. The joint probability mass function of A and B defines probabilities for each pair of outcomes. Since each of the four outcomes is equally likely, the joint probability mass function becomes P(A, B) = 1/4 for A, B ∈ {0, 1}. Since the coin flips are independent, the joint probability mass function is the product of the marginals. In general, each coin flip is a Bernoulli trial and the sequence of flips follows a Bernoulli distribution. Consider the roll of a die and let A = 1 if the number is even and A = 0 otherwise. Furthermore, let B = 1 if the number is prime and B = 0 otherwise. Then the joint distribution of A and B, expressed as a probability mass function, is P(A=0, B=0) = 1/6, P(A=1, B=0) = 2/6, P(A=0, B=1) = 2/6, P(A=1, B=1) = 1/6. These probabilities necessarily sum to 1, since the probability of some combination of A and B occurring is 1. The joint probability mass function of two discrete random variables X and Y is P(X=x and Y=y) = P(X=x | Y=y) · P(Y=y) = P(Y=y | X=x) · P(X=x). Again, since these are probability distributions, one has ∫x ∫y fX,Y(x, y) dy dx = 1; formally, fX,Y is the probability density function of (X, Y) with respect to the product measure on the respective supports of X and Y. Two discrete random variables X and Y are independent if the joint probability mass function satisfies P(X=x and Y=y) = P(X=x) · P(Y=y) for all x and y. 
Similarly, two absolutely continuous random variables are independent if fX,Y(x, y) = fX(x) · fY(y) for all x and y. Such conditional independence relations can be represented with a Bayesian network or with copula functions
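The die example above can be checked by enumerating the six equally likely outcomes (the variable names are illustrative):

```python
from fractions import Fraction

# A = 1 if the roll is even, B = 1 if the roll is prime (2, 3 or 5).
joint = {}
for roll in range(1, 7):
    a = 1 if roll % 2 == 0 else 0
    b = 1 if roll in (2, 3, 5) else 0
    joint[(a, b)] = joint.get((a, b), Fraction(0)) + Fraction(1, 6)

assert joint[(0, 0)] == Fraction(1, 6)  # only the roll 1
assert sum(joint.values()) == 1         # probabilities sum to one
```

Note that here A and B are not independent: the product of the marginals, P(A=1) · P(B=1) = 1/2 · 1/2 = 1/4, differs from P(A=1, B=1) = 1/6.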
Joint distribution
–
Many sample observations (black) are shown from a joint probability distribution. The marginal densities are shown as well.
45.
Conditional probability
–
In probability theory, conditional probability is a measure of the probability of an event given that another event has occurred. For example, the probability that any person has a cough on any given day may be only 5%, but if we know or assume that the person has a cold, the conditional probability of coughing given a cold might be a much higher 75%. The concept of conditional probability is one of the most fundamental in probability theory, but conditional probabilities can be slippery and require careful interpretation. For example, there need not be a causal or temporal relationship between A and B, and P(A|B) may or may not be equal to P(A). If P(A|B) = P(A), then events A and B are said to be independent. Also, in general, P(A|B) is not equal to P(B|A). For example, if you have cancer you might have a 90% chance of testing positive for cancer; in this case what is being measured is the probability of the event A (test is positive) given that the event B (having cancer) has occurred. Alternatively, you can test positive for cancer but have only a 10% chance of actually having cancer, because cancer is very rare; in this case what is being measured is the probability of the event B (having cancer) given that the event A (test is positive) has occurred. Falsely equating the two probabilities causes various errors of reasoning such as the base rate fallacy. Conditional probabilities can be reversed using Bayes' theorem. The conditional probability is defined as P(A|B) = P(A∩B) / P(B); the logic behind this equation is that if the outcomes are restricted to B, the event A can occur only as A∩B. Note that this is a definition, not a theoretical result: we simply denote the quantity P(A∩B) / P(B) as P(A|B) and call it the conditional probability of A given B. Further, this multiplication axiom introduces a symmetry with the summation axiom for mutually exclusive events, P(A∪B) = P(A) + P(B) − P(A∩B). If P(B) = 0, the conditional probability P(A|B) is undefined; however, it is possible to define a conditional probability with respect to a σ-algebra of such events. The case where B has zero measure is problematic; see conditional expectation for more information. 
Conditioning on an event may be generalized to conditioning on a random variable. Let X be a random variable; we assume for the sake of presentation that X is discrete, that is, X takes on only finitely many values x. The conditional probability of A given X is defined as the random variable, written P(A|X), that takes the value P(A|X = x) whenever X takes the value x
Conditional probability
46.
Continuous random variable
–
For instance, if the random variable X is used to denote the outcome of a coin toss, then the probability distribution of X would take the value 0.5 for X = heads, and 0.5 for X = tails. In more technical terms, the probability distribution is a description of a random phenomenon in terms of the probabilities of events. Examples of random phenomena include the results of an experiment or survey. A probability distribution is defined in terms of an underlying sample space, which is the set of all possible outcomes of the random phenomenon being observed. The sample space may be the set of real numbers or a higher-dimensional vector space, or it may be a list of non-numerical values. Probability distributions are divided into two classes. A discrete probability distribution can be encoded by a discrete list of the probabilities of the outcomes; a continuous probability distribution, on the other hand, is typically described by probability density functions. The normal distribution is a commonly encountered continuous probability distribution; more complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures. A probability distribution whose sample space is the set of real numbers is called univariate. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution. The multivariate normal distribution is a commonly encountered multivariate distribution. To define probability distributions for the simplest cases, one needs to distinguish between discrete and continuous random variables. In the continuous case, the probability of any individual value is zero; for example, the probability that an object weighs exactly 500 g is zero. Continuous probability distributions can be described in several ways; the cumulative distribution function is the antiderivative of the probability density function, provided that the latter function exists. 
As probability theory is used in diverse applications, terminology is not uniform. The following terms are used for probability distribution functions. Probability distribution: a table that displays the probabilities of outcomes in a sample; it could be called a normalized frequency distribution table, where all occurrences of outcomes sum to 1. Distribution function: a form of frequency distribution table. Probability distribution function: a form of probability distribution table
Continuous random variable
–
The probability mass function (pmf) p (S) specifies the probability distribution for the sum S of counts from two dice. For example, the figure shows that p (11) = 1/18. The pmf allows the computation of probabilities of events such as P (S > 9) = 1/12 + 1/18 + 1/36 = 1/6, and all other probabilities in the distribution.
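The figures quoted in this caption can be checked by direct enumeration of the 36 equally likely outcomes (a small sketch; the variable names are arbitrary):

```python
from fractions import Fraction
from collections import Counter

# pmf of the sum S of two fair six-sided dice, by enumeration
counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))
pmf = {s: Fraction(n, 36) for s, n in counts.items()}

print(pmf[11])                            # 1/18, as in the figure
print(sum(pmf[s] for s in (10, 11, 12)))  # P(S > 9) = 1/6
```

Using exact fractions avoids floating-point noise and makes the 1/18 and 1/6 values visible directly.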
47.
Inverse probability
–
In probability theory, inverse probability is an obsolete term for the probability distribution of an unobserved variable. The development of the field and terminology from inverse probability to Bayesian probability is described by Fienberg. The term inverse probability appears in an 1837 paper of De Morgan, in reference to Laplace's method of probability, though the term does not occur in Laplace's own writings. Later Jeffreys used the term in his defense of the methods of Bayes and Laplace. The term Bayesian, which displaced inverse probability, was introduced by Ronald Fisher around 1950. Inverse probability, variously interpreted, was the dominant approach to statistics until the development of frequentism in the early 20th century by Ronald Fisher, Jerzy Neyman and Egon Pearson. Following the development of frequentism, the terms frequentist and Bayesian developed to contrast these approaches; the distribution of the observed data given the unobserved variable is itself called the direct probability. The inverse probability problem was the problem of estimating a parameter from experimental data in the sciences, especially astronomy. A simple example would be the problem of estimating the position of a star in the sky for purposes of navigation: given the data, one must estimate the true position. This problem would now be considered one of inferential statistics. The terms direct probability and inverse probability were in use until the middle part of the 20th century, when the terms likelihood function and posterior distribution became prevalent
Inverse probability
–
Ronald Fisher
48.
Newtonian mechanics
–
In physics, classical mechanics is one of the two major sub-fields of mechanics, along with quantum mechanics. Classical mechanics is concerned with the set of physical laws describing the motion of bodies under the influence of a system of forces. The study of the motion of bodies is an ancient one, making classical mechanics one of the oldest and largest subjects in science, engineering and technology. Classical mechanics describes the motion of macroscopic objects, from projectiles to parts of machinery, as well as astronomical objects, such as spacecraft, planets and stars. Within classical mechanics are fields of study that describe the behavior of solids, liquids and gases. Classical mechanics provides extremely accurate results as long as the domain of study is restricted to large objects and to speeds that do not approach the speed of light. When both quantum effects and high speeds matter, so that neither approximation alone applies, quantum field theory becomes applicable. Since these aspects of physics were developed long before the emergence of quantum physics and relativity, some sources exclude relativity from the category of classical mechanics; however, a number of modern sources do include relativistic mechanics, which in their view represents classical mechanics in its most developed and accurate form. Later, more abstract and general methods were developed, leading to reformulations of classical mechanics known as Lagrangian mechanics and Hamiltonian mechanics. These advances were largely made in the 18th and 19th centuries, and they extend substantially beyond Newton's work, particularly through their use of analytical mechanics. The following introduces the basic concepts of classical mechanics. For simplicity, it often models real-world objects as point particles; the motion of a point particle is characterized by a small number of parameters: its position, mass, and the forces applied to it. Each of these parameters is discussed in turn. In reality, the kind of objects that classical mechanics can describe always have a non-zero size. 
Objects with non-zero size have more complicated behavior than hypothetical point particles, because of their additional degrees of freedom. However, the results for point particles can be applied to such objects by treating them as composite objects, made of a large number of collectively acting point particles. The center of mass of a composite object behaves like a point particle. Classical mechanics uses common-sense notions of how matter and forces exist and interact. It assumes that matter and energy have definite, knowable attributes such as an object's location in space and its speed. Non-relativistic mechanics also assumes that forces act instantaneously. The position of a point particle is defined with respect to a fixed reference point in space called the origin O. A simple coordinate system might describe the position of a point P by means of an arrow, designated r, that points from the origin O to the point P. In general, the point particle need not be stationary relative to O, so that r is a function of t, the time
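As a minimal sketch of how the position r becomes a function of the time t once a force is specified, the one-dimensional constant-force case integrates Newton's second law in closed form (the function name and the numbers are illustrative):

```python
def position(r0, v0, f, m, t):
    """Position of a point particle under a constant force f.
    From Newton's second law F = m*a, the acceleration is a = f/m,
    and integrating twice gives r(t) = r0 + v0*t + 0.5*a*t**2."""
    a = f / m
    return r0 + v0 * t + 0.5 * a * t * t

# a 2 kg particle starting at rest at the origin, pushed with 4 N for 3 s:
# a = 2 m/s^2, so r(3) = 0.5 * 2 * 9 = 9 metres
print(position(0.0, 0.0, 4.0, 2.0, 3.0))  # 9.0
```

The same closed form applies componentwise to vector positions, which is why the point-particle description needs only position, mass, and the applied forces.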
Newtonian mechanics
–
Sir Isaac Newton (1643–1727), an influential figure in the history of physics and whose three laws of motion form the basis of classical mechanics
Newtonian mechanics
–
Diagram of orbital motion of a satellite around the earth, showing perpendicular velocity and acceleration (force) vectors.
Newtonian mechanics
–
Hamilton 's greatest contribution is perhaps the reformulation of Newtonian mechanics, now called Hamiltonian mechanics.
49.
Chaos theory
–
Chaos theory is a branch of mathematics focused on the behavior of dynamical systems that are highly sensitive to initial conditions. This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as: Chaos: when the present determines the future, but the approximate present does not approximately determine the future. Chaotic behavior exists in many natural systems, such as weather and climate. It also occurs spontaneously in some systems with artificial components, such as road traffic. This behavior can be studied through analysis of a chaotic mathematical model, or through analytical techniques such as recurrence plots and Poincaré maps. Chaos theory has applications in several disciplines, including meteorology, sociology, physics, environmental science, computer science, engineering, economics, biology and ecology, and the theory formed the basis for such fields of study as complex dynamical systems, edge-of-chaos theory and self-assembly processes. Chaos theory concerns deterministic systems whose behavior can in principle be predicted; chaotic systems are predictable for a while and then appear to become random. The characteristic timescale on which this predictability is lost is called the Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days. In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast. This means, in practice, that a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random. In common usage, chaos means a state of disorder. However, in chaos theory, the term is defined more precisely. 
Although no universally accepted mathematical definition of chaos exists, a commonly used definition, originally formulated by Robert L. Devaney, requires that a chaotic system be sensitive to initial conditions, be topologically mixing, and have dense periodic orbits. In some cases the last two properties have been shown to imply sensitivity to initial conditions; in these cases, while it is often the most practically significant property, sensitivity to initial conditions need not be stated in the definition. If attention is restricted to intervals, the second property implies the other two. An alternative, and in general weaker, definition of chaos uses only the first two properties in the above list. Sensitivity to initial conditions means that each point in a system is arbitrarily closely approximated by other points with significantly different future paths. Thus, a small change, or perturbation, of the current trajectory may lead to significantly different future behavior. This sensitivity is popularly known as the butterfly effect, after the title of a talk given by Edward Lorenz entitled Predictability: Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas? The flapping wing represents a small change in the initial condition of the system, which causes a chain of events leading to large-scale phenomena
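Sensitive dependence on initial conditions is easy to demonstrate with the logistic map, a standard one-dimensional chaotic system used here as an illustrative stand-in (the parameter r = 4 puts the map in its chaotic regime):

```python
def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x); for r = 4 it is chaotic."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.200000)
b = logistic_orbit(0.200001)  # initial condition perturbed by 1e-6

# after one step the two orbits are still nearly identical...
print(abs(a[1] - b[1]))
# ...but the perturbation grows roughly exponentially (positive Lyapunov
# exponent) until the two orbits are effectively uncorrelated
print(max(abs(x - y) for x, y in zip(a, b)))
```

A difference of one part in a million in the starting value eventually produces order-one differences in the orbit, which is the butterfly effect in miniature.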
Chaos theory
–
The Lorenz attractor displays chaotic behavior. These two plots demonstrate sensitive dependence on initial conditions within the region of phase space occupied by the attractor.
Chaos theory
–
A plot of Lorenz attractor for values r = 28, σ = 10, b = 8/3
Chaos theory
–
Turbulence in the tip vortex from an airplane wing. Studies of the critical point beyond which a system creates turbulence were important for chaos theory, analyzed for example by the Soviet physicist Lev Landau, who developed the Landau-Hopf theory of turbulence. David Ruelle and Floris Takens later predicted, against Landau, that fluid turbulence could develop through a strange attractor, a main concept of chaos theory.
Chaos theory
–
A conus textile shell, similar in appearance to Rule 30, a cellular automaton with chaotic behaviour.
50.
Roulette
–
Roulette is a casino game named after the French word meaning little wheel. The ball eventually loses momentum and falls onto the wheel and into one of 37 or 38 colored and numbered pockets on the wheel. The first form of roulette was devised in 18th century France; a century earlier, in his search for a perpetual motion machine, Blaise Pascal had introduced a primitive form of the wheel. The game has been played in its present form since as early as 1796 in Paris. An early description of the game noted the house pockets: "There are exactly two slots reserved for the bank, whence it derives its sole mathematical advantage", and it then goes on to describe the layout with "two betting spaces containing the bank's two numbers, zero and double zero". The book was published in 1801. An even earlier reference to a game of this name was published in regulations for New France in 1758, which banned the games of dice, hoca, faro, and roulette. The roulette wheels used in the casinos of Paris in the late 1790s had red for the single zero and black for the double zero; to avoid confusion, the color green was selected for the zeros in roulette wheels starting in the 1800s. In some forms of early American roulette wheels, as shown in the 1886 Hoyle gambling books, there were numbers 1 through 28, plus a single zero, a double zero, and an American Eagle. The Eagle slot, which was a symbol of American liberty, was a slot that brought the casino extra edge. Soon the tradition vanished, and since then the wheel has featured only numbered slots. Existing wheels with Eagle symbols are rare, with fewer than a half-dozen copies known to exist. Authentic Eagled wheels in excellent condition can fetch tens of thousands of dollars at auction. In the 19th century, roulette spread all over Europe and the US, becoming one of the most famous and most popular casino games. 
A legend says that François Blanc supposedly bargained with the devil to obtain the secrets of roulette; the legend is based on the fact that the sum of all the numbers on the roulette wheel (from 0 to 36) is 666, which is the Number of the Beast. In the United States, the French double zero wheel made its way up the Mississippi from New Orleans, and this eventually evolved into the American-style roulette game, distinct from the traditional French game. The American game developed in the gambling dens across the new territories, where makeshift games had been set up, whereas the French game evolved with style and leisure in Monte Carlo. However, it is the American-style layout, with its simplified betting and fast cash action, using either a single or double zero wheel, that now dominates in most casinos around the world. During the first part of the 20th century, the only casino towns of note were Monte Carlo, with the traditional single zero French wheel, and Las Vegas, with the American double zero wheel. In the 1970s, casinos began to flourish around the world; by 2008 there were several hundred casinos worldwide offering roulette games
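The bank's "sole mathematical advantage" from the zero pockets can be quantified. Assuming the standard 35-to-1 payout on a straight-up (single-number) bet, the expected value per unit staked is:

```python
from fractions import Fraction

def straight_up_ev(pockets):
    """Expected value of a 1-unit straight-up bet paying 35 to 1
    on a wheel with the given number of pockets."""
    win = Fraction(1, pockets)          # one winning pocket
    return win * 35 - (1 - win) * 1     # winnings minus expected loss

print(straight_up_ev(37))  # European, single zero:  -1/37 (about -2.7%)
print(straight_up_ev(38))  # American, double zero:  -1/19 (about -5.3%)
```

Because the payout is calibrated to a 36-pocket wheel, each extra zero pocket shifts the expectation further in the house's favor, which is why the double zero wheel carries roughly twice the edge of the single zero wheel.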
Roulette
–
Roulette ball
Roulette
–
French roulette
Roulette
–
"Gwendolen at the roulette table" - 1910 illustration to George Eliot's "Daniel Deronda".
Roulette
–
18th century E.O. wheel with gamblers
51.
Quantum mechanics
–
Quantum mechanics, including quantum field theory, is a branch of physics which is the fundamental theory of nature at the small scales and low energies of atoms and subatomic particles. Classical physics, the physics existing before quantum mechanics, derives from quantum mechanics as an approximation valid only at large scales. Early quantum theory was profoundly reconceived in the mid-1920s. The reconceived theory is formulated in various specially developed mathematical formalisms; in one of them, a mathematical function, the wave function, provides information about the probability amplitude of position, momentum, and other physical properties of a particle. In 1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he later described in a paper titled On the nature of light and colours. This experiment played a major role in the general acceptance of the wave theory of light. In 1838, Michael Faraday discovered cathode rays. Planck's hypothesis that energy is radiated and absorbed in discrete quanta precisely matched the observed patterns of black-body radiation. In 1896, Wilhelm Wien empirically determined a distribution law of black-body radiation; Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, Wien's law was valid only at high frequencies and underestimated the radiance at low frequencies. Later, Planck corrected this model using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law. Following Max Planck's solution in 1900 to the black-body radiation problem, Albert Einstein offered a quantum-based theory to explain the photoelectric effect. Among the first to study quantum phenomena in nature were Arthur Compton and C. V. Raman; Robert Andrews Millikan studied the photoelectric effect experimentally, and Albert Einstein developed a theory for it. In 1913, Peter Debye extended Niels Bohr's theory of atomic structure, introducing elliptical orbits. 
This phase is known as the old quantum theory. According to Planck, each energy element is proportional to its frequency: E = hν, where h is Planck's constant. Planck cautiously insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself. In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. Einstein, however, further developed this idea to show that an electromagnetic wave such as light could also be described as a particle, with a discrete quantum of energy that was dependent on its frequency; he won the 1921 Nobel Prize in Physics for this work on the photoelectric effect. The Copenhagen interpretation of Niels Bohr became widely accepted. In the mid-1920s, developments in quantum mechanics led to its becoming the standard formulation for atomic physics. In the summer of 1925, Bohr and Heisenberg published results that closed the old quantum theory. Out of deference to their particle-like behavior in certain processes and measurements, light quanta came to be called photons. From Einstein's simple postulation was born a flurry of debating, theorizing, and testing. Thus the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927
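Planck's relation E = hν is easy to evaluate numerically. The sketch below uses the modern SI value of h; the frequency chosen for green light is an illustrative round number:

```python
# Photon energy from the Planck relation E = h * nu
H = 6.62607015e-34  # Planck's constant in J*s (exact by SI definition)

def photon_energy(frequency_hz):
    """Energy in joules of one quantum of radiation at the given frequency."""
    return H * frequency_hz

# green light at roughly 5.6e14 Hz: each quantum carries ~3.7e-19 J
print(photon_energy(5.6e14))
```

The tiny size of each energy element is why the granularity of radiation went unnoticed before the black-body and photoelectric experiments forced the issue.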
Quantum mechanics
–
Max Planck is considered the father of the quantum theory.
Quantum mechanics
–
Solution to Schrödinger's equation for the hydrogen atom at different energy levels. The brighter areas represent a higher probability of finding an electron
Quantum mechanics
–
The 1927 Solvay Conference in Brussels.
52.
Max Born
–
Max Born was a German physicist and mathematician who was instrumental in the development of quantum mechanics. He also made contributions to solid-state physics and optics and supervised the work of a number of notable physicists in the 1920s and 1930s. Born won the 1954 Nobel Prize in Physics for his research in quantum mechanics, especially his statistical interpretation of the wave function. He wrote his Ph.D. thesis on the subject of Stability of Elastica in a Plane and Space. In 1905, he began researching special relativity with Minkowski, and subsequently wrote his habilitation thesis on the Thomson model of the atom. In the First World War, after originally being placed as a radio operator, he was moved to research duties on account of his specialist knowledge. In 1921, Born returned to Göttingen, arranging another chair for his long-time friend and colleague James Franck. Under Born, Göttingen became one of the world's foremost centres for physics. In 1925, Born and Werner Heisenberg formulated the matrix mechanics representation of quantum mechanics. The following year, he formulated the now-standard interpretation of the probability density function for ψ*ψ in the Schrödinger equation. His influence extended far beyond his own research: Max Delbrück, Siegfried Flügge, Friedrich Hund, Pascual Jordan, Maria Goeppert-Mayer, Lothar Wolfgang Nordheim and Robert Oppenheimer were among those who worked under Born at Göttingen. In January 1933, the Nazi Party came to power in Germany, and Born, who was Jewish, was suspended from his professorship. Max Born became a naturalised British subject on 31 August 1939, and he remained at Edinburgh until 1952. He retired to Bad Pyrmont, in West Germany, and died in a hospital in Göttingen on 5 January 1970. Max Born was born on 11 December 1882 in Breslau, which at the time of Born's birth was part of the Prussian Province of Silesia in the German Empire. His mother died when Max was four years old, on 29 August 1886. Max had a sister, Käthe, who was born in 1884, and a half-brother, Wolfgang; Wolfgang later became Professor of Art History at the City College of New York. 
Initially educated at the König-Wilhelm-Gymnasium in Breslau, Born entered the University of Breslau in 1901. The German university system allowed students to move easily from one university to another, so he spent summer semesters at Heidelberg University in 1902 and the University of Zurich in 1903. Fellow students at Breslau, Otto Toeplitz and Ernst Hellinger, told Born about the University of Göttingen, and at Göttingen he found three renowned mathematicians: Felix Klein, David Hilbert and Hermann Minkowski. Very soon after his arrival, Born formed close ties to the latter two men. Being class scribe put Born into regular, invaluable contact with Hilbert; Hilbert became Born's mentor after selecting him to be the first to hold the unpaid, semi-official position of assistant. Born's introduction to Minkowski came through Born's stepmother, Bertha, as she knew Minkowski from dancing classes in Königsberg, and the introduction netted Born invitations to the Minkowski household for Sunday dinners. While Born's duties as scribe and assistant brought him close to Hilbert, his relationship with Klein was more problematic
Max Born
–
Max Born (1882–1970)
Max Born
–
Solvay Conference, 1927. Born is second from the right in the second row, between Louis de Broglie and Niels Bohr.
Max Born
–
Born's gravestone in Göttingen is inscribed with the uncertainty principle, which he put on a firm mathematical footing.
53.
Reality
–
Reality is the state of things as they actually exist, rather than as they may appear or might be imagined. Reality includes everything that is and has been, whether or not it is observable or comprehensible; a still broader definition includes that which has existed, exists, or will exist. By contrast, existence is often restricted solely to that which has physical existence or has a direct basis in it, in the way that thoughts do in the brain. Reality is often contrasted with what is imaginary, illusory, delusional, only in the mind, dreams, what is false and what is fictional. At the same time, what is abstract plays a role both in everyday life and in academic research. For instance, causality, virtue, life, and distributive justice are abstract concepts that can be difficult to define, but they are only rarely equated with pure delusions. Both the existence and reality of abstractions are in dispute: one extreme position regards them as mere words. This disagreement is the basis of the philosophical problem of universals. Truth refers to what is real, while falsity refers to what is not. A common colloquial usage would have reality mean perceptions, beliefs, and attitudes toward reality, as in "My reality is not your reality." This is often used just as a colloquialism indicating that the parties to a conversation agree, or should agree, not to quibble over deeply different conceptions of what is real. For example, in a religious discussion between friends, one might say, "You might disagree, but in my reality, everyone goes to heaven." Reality can be defined in a way that links it to world views or parts of them: reality is the totality of all things, structures, events and phenomena, and it is what a world view ultimately attempts to describe or map. Certain ideas from physics, philosophy, sociology, literary criticism and other fields shape various theories of reality. One such belief is that there simply and literally is no reality beyond the perceptions or beliefs we each have about reality. 
Many of the concepts of science and philosophy are often defined culturally and socially. This idea was elaborated by Thomas Kuhn in his book The Structure of Scientific Revolutions. The Social Construction of Reality, a book about the sociology of knowledge written by Peter L. Berger and Thomas Luckmann, explained how knowledge is acquired and used for the comprehension of reality. Out of all the realities, the reality of everyday life is the most important one, since our consciousness requires us to be completely aware and attentive to the experience of everyday life. Philosophy addresses two different aspects of the topic of reality: the nature of reality itself, and the relationship between the mind and reality. On the one hand, ontology is the study of being, and the central topic of the field is couched, variously, in terms of being, existence, and what is. The task in ontology is to describe the most general categories of reality; if a philosopher wanted to proffer a positive definition of the concept reality, it would be done under this heading. As explained above, some philosophers draw a distinction between reality and existence. In fact, many analytic philosophers today tend to avoid the terms real and reality in discussing ontological issues. But for those who would treat is real the same way they treat exists, one of the central questions has been whether existence (or reality) is a property of objects. It has been widely held by analytic philosophers that it is not a property at all, though this view has lost some ground in recent decades
Reality
–
Reality-Virtuality Continuum.
54.
Class membership probabilities
–
Probabilistic classifiers provide classification with a degree of certainty, which can be useful in its own right, or when combining classifiers into ensembles. Probabilistic classifiers generalize the ordinary notion of classifiers: instead of functions that map an input to a single label, they are conditional distributions Pr(Y | X), meaning that for a given x ∈ X, they assign probabilities to all y ∈ Y. Hard classification can then be done using the optimal decision rule ŷ = arg max_y Pr(Y = y | X = x), or, in English, the predicted class is the one with the highest probability. Binary probabilistic classifiers are also called binomial regression models in statistics; in econometrics, probabilistic classification in general is called discrete choice. Some classification models, such as naive Bayes and logistic regression, are naturally probabilistic; other models such as support vector machines are not, but methods exist to turn them into probabilistic classifiers. Some models, such as logistic regression, are conditionally trained: they optimize the conditional probability Pr(Y | X) directly on a training set. Not all classification models are probabilistic, and some that are, notably naive Bayes classifiers, decision trees and boosting methods, produce distorted class probability distributions. For classification models that produce only some kind of score on their outputs, calibration methods can convert these scores into probabilities. For the binary case, a common approach is to apply Platt scaling, which learns a logistic regression model on the scores. An alternative method using isotonic regression is generally superior to Platt's method when sufficient training data is available. Commonly used loss functions for probabilistic classification include log loss and the mean squared error between the predicted and the true probability distributions; the former of these is used to train logistic models. A method used to assign scores to pairs of predicted probabilities and actual discrete outcomes, so that different predictive methods can be compared, is called a scoring rule
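The argmax decision rule and the log loss described above can be sketched in a few lines (a minimal illustration; the label names and probabilities are invented):

```python
import math

def hard_classify(cond_probs):
    """Turn a probabilistic prediction Pr(Y = y | x), given as a
    dict label -> probability, into a hard label: y_hat = argmax_y Pr."""
    return max(cond_probs, key=cond_probs.get)

def log_loss(cond_probs, true_label):
    """Log loss of a single prediction: -log Pr(true label | x).
    It is 0 for a confident correct prediction and grows without
    bound as the probability assigned to the truth approaches 0."""
    return -math.log(cond_probs[true_label])

pred = {"spam": 0.8, "ham": 0.2}
print(hard_classify(pred))               # 'spam'
print(round(log_loss(pred, "spam"), 4))  # -ln(0.8) ≈ 0.2231
```

Note how the probabilistic output carries more information than the hard label alone: two predictions that both say "spam" can incur very different log losses depending on how confident they were.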
Class membership probabilities
–
Machine learning and data mining
55.
Heuristics in judgment and decision-making
–
In psychology, heuristics are simple, efficient rules which people often use to form judgments and make decisions. They are mental shortcuts that usually involve focusing on one aspect of a complex problem and ignoring others. These rules work well under most circumstances, but they can lead to systematic deviations from logic, probability or rational choice theory. The resulting errors are called cognitive biases, and many different types have been documented; these have been shown to affect people's choices in situations like valuing a house, deciding the outcome of a legal case, or making an investment decision. Heuristics usually govern automatic, intuitive judgments but can also be used as deliberate mental strategies when working from limited information. Cognitive scientist Herbert A. Simon originally proposed that human judgments are limited by available information, time constraints, and cognitive limitations, calling this bounded rationality. In the early 1970s, psychologists Amos Tversky and Daniel Kahneman demonstrated three heuristics that underlie a range of intuitive judgments. These findings set in motion the heuristics and biases research program, which studies how people make real-world judgments. This research challenged the idea that human beings are rational actors, but provided a theory of information processing to explain how people make estimates or choices. The heuristics-and-biases tradition has been criticised by Gerd Gigerenzer and others for being too focused on how heuristics lead to errors; the critics argue that heuristics can be seen as rational in an underlying sense. According to this perspective, heuristics are good enough for most purposes without being too demanding on the brain's resources. Another theoretical perspective sees heuristics as fully rational in that they are rapid, can be made without full information and can be as accurate as more complicated procedures. 
By understanding the role of heuristics in human psychology, marketers and other persuaders can influence decisions. In their initial research, Tversky and Kahneman proposed three heuristics: availability, representativeness, and anchoring and adjustment. Subsequent work has identified many more. Heuristics that underlie judgment are called judgment heuristics; another type, called evaluation heuristics, are used to judge the desirability of possible choices. In psychology, availability is the ease with which an idea can be brought to mind. When people estimate how likely or how frequent an event is on the basis of its availability, they are using the availability heuristic. When an infrequent event can be brought easily and vividly to mind, people tend to overestimate its likelihood. For example, people overestimate their likelihood of dying in a dramatic event such as a tornado or terrorism. Dramatic, violent deaths are more highly publicised and therefore have a higher availability. On the other hand, common but mundane events are hard to bring to mind; these include deaths from suicides, strokes, and diabetes. This heuristic is one of the reasons why people are more easily swayed by a single, vivid story than by a large body of statistical evidence. It may also play a role in the appeal of lotteries: to someone buying a ticket, the well-publicised winners are far more available than the many who have won nothing
Heuristics in judgment and decision-making
–
The amount of money people will pay in an auction for a bottle of wine can be influenced by considering an arbitrary two-digit number.
Heuristics in judgment and decision-making
–
A visual example of attribute substitution. This illusion works because the 2D size of parts of the scene is judged on the basis of 3D (perspective) size, which is rapidly calculated by the visual system.
56.
Probability density function
–
In a more precise sense, the PDF is used to specify the probability of the random variable falling within a particular range of values, as opposed to taking on any one value. The probability density function is nonnegative everywhere, and its integral over the entire space is equal to one. The terms probability distribution function and probability function have also sometimes been used to denote the probability density function. However, this use is not standard among probabilists and statisticians. Further confusion of terminology exists because density function has also been used for what is here called the probability mass function. In general though, the PMF is used in the context of discrete random variables, while the PDF is used in the context of continuous random variables. Suppose a species of bacteria typically lives 4 to 6 hours. What is the probability that a bacterium lives exactly 5 hours? A lot of bacteria live for approximately 5 hours, but there is no chance that any given bacterium dies at exactly 5.0000000000... hours. Instead we might ask: what is the probability that the bacterium dies between 5 hours and 5.01 hours? Let's say the answer is 0.02. Next: what is the probability that the bacterium dies between 5 hours and 5.001 hours? The answer is probably around 0.002, since this interval is 1/10th as long as the previous one. The probability that the bacterium dies between 5 hours and 5.0001 hours is probably about 0.0002, and so on. In these three examples, the ratio (probability of dying during an interval) / (duration of the interval) is approximately constant, and equal to 2 per hour. For example, there is 0.02 probability of dying in the 0.01-hour interval between 5 and 5.01 hours, and (0.02 probability / 0.01 hours) = 2 hour−1. This quantity 2 hour−1 is called the probability density for dying at around 5 hours. Therefore, in response to the question "What is the probability that the bacterium dies at 5 hours?", a literally correct but unhelpful answer is 0, but a better answer can be written as (2 hour−1) dt. This is the probability that the bacterium dies within an infinitesimal window of time around 5 hours, where dt is the duration of this window. 
For example, the probability that it lives longer than 5 hours cannot be read off from the density directly; instead, there is a probability density function f with f(5 hours) = 2 hour−1, and the integral of f over any window of time is the probability that the bacterium dies in that window. A probability density function is most commonly associated with absolutely continuous univariate distributions. A random variable X has density fX, where fX is a non-negative Lebesgue-integrable function, if Pr[a ≤ X ≤ b] = ∫ab fX(x) dx. That is, fX is any function with this property for every interval [a, b]. More generally, a density can be defined relative to a reference measure; in the continuous univariate case above, the reference measure is the Lebesgue measure.
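The bacteria example can be checked numerically. The sketch below assumes a hypothetical normal lifetime distribution, with a mean of 5 hours and a standard deviation of 0.2 hours chosen so that the density at 5 hours is about 2 per hour; it shows that the probability of dying in a small window, divided by the window's width, approaches the density at that point:

```python
import math

def normal_pdf(x, mu, sigma):
    # Density of the normal distribution N(mu, sigma^2) at x.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def normal_cdf(x, mu, sigma):
    # Cumulative distribution function, via the error function.
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

mu, sigma = 5.0, 0.2  # hypothetical lifetime distribution, in hours

# Probability of dying in [5, 5 + dt] shrinks with dt, but the ratio
# probability/dt stabilises at the density:
for dt in (0.01, 0.001, 0.0001):
    p = normal_cdf(5 + dt, mu, sigma) - normal_cdf(5, mu, sigma)
    print(dt, p, p / dt)

print(normal_pdf(5, mu, sigma))  # about 1.995, i.e. roughly 2 per hour
```

The literal probability of dying at exactly 5 hours is 0, but each printed ratio p/dt reproduces the "2 hour−1" density figure from the text.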
Probability density function
–
Boxplot and probability density function of a normal distribution N (0, σ 2).
57.
Sample space
–
In probability theory, the sample space of an experiment or random trial is the set of all possible outcomes or results of that experiment. A sample space is usually denoted using set notation, and the possible outcomes are listed as elements in the set. It is common to refer to a sample space by the labels S or Ω. For example, if the experiment is tossing a coin, the sample space is typically the set {heads, tails}. For tossing two coins, the sample space would be {(heads, heads), (heads, tails), (tails, heads), (tails, tails)}. For tossing a single six-sided die, the sample space is {1, 2, 3, 4, 5, 6}. A well-defined sample space is one of three elements in a probabilistic model; the other two are a well-defined set of possible events and a probability assigned to each event. For many experiments, there may be more than one plausible sample space available. For example, when drawing a card from a standard deck of fifty-two playing cards, one possibility for the sample space could be the various ranks, while another could be the suits. Still other sample spaces are possible, such as if some cards have been flipped when shuffling. Some treatments of probability assume that the various outcomes of an experiment are always defined so as to be equally likely; the result of this is that every possible combination of individuals who could be chosen for the sample is also equally likely. In an elementary approach to probability, any subset of the sample space is usually called an event. However, this gives rise to problems when the sample space is infinite; under a more careful definition, only measurable subsets of the sample space, constituting a σ-algebra over the sample space itself, are considered events.
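The coin and die sample spaces above can be written out directly. A minimal sketch in Python, representing sample spaces as sets and events as subsets:

```python
from itertools import product

coin = {"H", "T"}                         # sample space for one coin toss
two_coins = set(product(coin, repeat=2))  # {('H','H'), ('H','T'), ('T','H'), ('T','T')}
die = set(range(1, 7))                    # sample space for one six-sided die

# An event is a subset of the sample space, e.g. "at least one head":
at_least_one_head = {outcome for outcome in two_coins if "H" in outcome}

# With equally likely outcomes, P(E) = |E| / |sample space|:
p = len(at_least_one_head) / len(two_coins)
print(p)  # 0.75
```

The |E|/|Ω| formula is valid only under the equally-likely assumption the text mentions; for a biased coin (or the brass tack in the figure) the probabilities must be assigned per outcome instead.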
Sample space
–
Flipping a coin leads to a sample space composed of two outcomes that are almost equally likely.
Sample space
–
Up or down? Flipping a brass tack leads to a sample space composed of two outcomes that are not equally likely.
58.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The ISBN was based upon the 9-digit Standard Book Numbering (SBN) created in 1966: the SBN scheme was devised in 1967 in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay, and the 10-digit ISBN format was developed by the International Organization for Standardization and published in 1970 as international standard ISO 2108. The United Kingdom continued to use the 9-digit SBN code until 1974; the ISO on-line facility only refers back to 1978. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340-01381-8: 340 indicating the publisher, 01381 their serial number, and 8 the check digit. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Numbers (EAN-13).
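The check-digit arithmetic behind these conversions can be sketched as follows. For an ISBN-10, the first nine digits are weighted 1 through 9 and summed modulo 11 (a remainder of 10 is written as 'X'); for an ISBN-13, the weights alternate 1 and 3 and the sum is taken modulo 10. The examples from the text are used below:

```python
def isbn10_check_digit(first9: str) -> str:
    # Weighted sum of the first nine digits with weights 1..9, modulo 11.
    total = sum(i * int(d) for i, d in enumerate(first9, start=1))
    r = total % 11
    return "X" if r == 10 else str(r)

def isbn13_check_digit(first12: str) -> str:
    # Weights alternate 1, 3, 1, 3, ...; the check digit makes the
    # full weighted sum a multiple of 10.
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(first12))
    return str((10 - total % 10) % 10)

def sbn_to_isbn10(sbn9: str) -> str:
    # An SBN converts to an ISBN-10 by prefixing '0';
    # the check digit is unchanged.
    return "0" + sbn9

print(sbn_to_isbn10("340013818"))          # 0340013818, i.e. ISBN 0-340-01381-8
print(isbn10_check_digit("034001381"))     # 8 -- matches the printed check digit
print(isbn13_check_digit("978316148410"))  # 0, as in ISBN 978-3-16-148410-0
```

Prefixing the digit 0 contributes nothing to the weighted sum, which is why the SBN check digit carries over unchanged.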
An ISBN is assigned to each edition and variation of a book; for example, an ebook, a paperback, and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007. A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces; separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture, whereas in the United Kingdom, the United States, and some other countries the service is provided by non-government-funded organisations. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker.
International Standard Book Number
–
A 13-digit ISBN, 978-3-16-148410-0, as represented by an EAN-13 bar code
59.
Journal of the American Statistical Association
–
The Journal of the American Statistical Association is the primary journal published by the American Statistical Association, the main professional body for statisticians in the United States. It is published four times a year, and had an impact factor of 2.063 in 2010, tenth highest in the Statistics and Probability category of Journal Citation Reports. In a 2003 survey of statisticians, the Journal of the American Statistical Association was ranked first, among all journals, for Applications of Statistics. The predecessor of this journal started in 1888 with the name Publications of the American Statistical Association; it became the Quarterly Publications of the American Statistical Association in 1912 before taking its present name.
60.
Cambridge University Press
–
Cambridge University Press is the publishing business of the University of Cambridge. Granted letters patent by Henry VIII in 1534, it is the world's oldest publishing house, and it also holds letters patent as the Queen's Printer. The Press's mission is to further the University's mission by disseminating knowledge in the pursuit of education, learning and research. Cambridge University Press is a department of the University of Cambridge and is both an academic and educational publisher, with a global presence, publishing hubs, and offices in more than 40 countries. Its publishing includes journals, monographs, reference works and textbooks. Cambridge University Press is an enterprise that transfers part of its annual surplus back to the university. Cambridge University Press is both the oldest publishing house in the world and the oldest university press; it originated from letters patent granted to the University of Cambridge by Henry VIII in 1534, and has been producing books continuously since the first University Press book was printed. Cambridge is one of the two privileged presses (the other being Oxford University Press). Authors published by Cambridge have included John Milton, William Harvey, Isaac Newton, Bertrand Russell, and Stephen Hawking. In 1591, Thomas's successor, John Legate, printed the first Cambridge Bible; the London Stationers objected strenuously, claiming that they had the monopoly on Bible printing, and the university's response was to point out the provision in its charter to print all manner of books. In July 1697 the Duke of Somerset made a loan of £200 to the university towards the house and press, and James Halman, Registrary of the University, contributed as well. It was in Bentley's time, in 1698, that a body of scholars was appointed to be responsible to the university for the Press's affairs. The Press Syndicate's publishing committee still meets regularly, and review remains part of its role. John Baskerville became University Printer in the mid-eighteenth century.
Baskerville's concern was the production of the finest possible books using his own type-design. A technological breakthrough was badly needed, and it came when Lord Stanhope perfected the making of stereotype plates. This involved making a mould of the surface of a page of type, from which printing plates could be cast. The Press was the first to use this technique, and in 1805 produced a technically successful edition by this method. Under the stewardship of C. J. Clay, who was University Printer from 1854 to 1882, the Press increased the size and scale of its academic and educational publishing operation; an important factor in this increase was the inauguration of its list of schoolbooks. During Clay's administration, the Press also undertook a sizable co-publishing venture with Oxford: the Revised Version of the Bible, which was begun in 1870 and completed in 1885. It was Wright who devised the plan for one of the most distinctive Cambridge contributions to publishing, the Cambridge Histories; the Cambridge Modern History was published between 1902 and 1912.
Cambridge University Press
–
The University Printing House, on the main site of the Press
Cambridge University Press
–
The letters patent of Cambridge University Press by Henry VIII allow the Press to print "all manner of books". The fine initial with the king's portrait inside it and the large first line of script are still discernible.
Cambridge University Press
–
The Pitt Building in Cambridge, which used to be the headquarters of Cambridge University Press, and now serves as a conference centre for the Press.
61.
Edwin Thompson Jaynes
–
Edwin Thompson Jaynes was the Wayman Crow Distinguished Professor of Physics at Washington University in St. Louis. Jaynes strongly promoted the interpretation of probability theory as an extension of logic. In 1963, together with Fred Cummings, he modeled the evolution of a two-level atom in an electromagnetic field in a fully quantized way; this model is known as the Jaynes–Cummings model. Other contributions include the mind projection fallacy. His book Probability Theory: The Logic of Science was published posthumously in 2003; an unofficial list of errata is hosted by Kevin S. Van Horn. Jaynes also published an analysis of Rudolph Wolf's dice data.
Edwin Thompson Jaynes
–
Edwin Thompson Jaynes (1922–1998), photo taken circa 1960.
62.
An Anthology of Chance Operations
–
An Anthology of Chance Operations was an artist's book publication from the early 1960s of experimental neo-dada art and music composition that used John Cage-inspired indeterminacy. It was edited by La Monte Young and DIY co-published in 1963 by Young; the project became the manifestation of the original impetus for establishing Fluxus. Given free rein to include whoever and whatever he wanted, Young collected a body of new and experimental music, anti-art, poetry, essays and performance scores from America and Europe. The magazine for which the material had originally been intended, however, folded after one issue. Although it can be argued that An Anthology is not strictly a Fluxus publication, it was the first collaborative publication project between people who were to become part of Fluxus: Young, Mac Low and Maciunas. The art dealer Heiner Friedrich issued an edition in 1970. Contributors included Malka Safro, Simone Forti, Nam June Paik, Terry Riley, Dieter Roth, James Waring, Emmett Williams, Christian Wolff and La Monte Young.
An Anthology of Chance Operations
–
Book cover.
63.
GNU Free Documentation License
–
The GNU Free Documentation License (GFDL) is a copyleft license for free documentation, designed by the Free Software Foundation (FSF) for the GNU Project. It is similar to the GNU General Public License, giving readers the rights to copy, redistribute and modify a work; copies may also be sold commercially, but, if produced in larger quantities, the original document or source code must be made available to the work's recipient. The GFDL was designed for manuals, textbooks, and other reference and instructional materials; however, it can be used for any text-based work, regardless of subject matter. For example, the online encyclopedia Wikipedia uses the GFDL for all of its text. The GFDL was released in draft form for feedback in September 1999. After revisions, version 1.1 was issued in March 2000, and version 1.2 in November 2002; the current version of the license is version 1.3. The first discussion draft of the GNU Free Documentation License version 2 was released on September 26, 2006. Material licensed under the current version of the license can be used for any purpose, as long as the use meets certain conditions: all previous authors of the work must be attributed; all changes to the work must be logged; all derivative works must be licensed under the same license; and the full text of the license, unmodified invariant sections as defined by the author if any, and any other added warranty disclaimers and copyright notices from previous versions must be maintained. Technical measures such as DRM may not be used to control or obstruct distribution or editing of the document. The license explicitly separates any kind of Document from Secondary Sections, which may not be integrated with the Document, but exist as front-matter materials or appendices. Secondary sections can contain information regarding the author's or publisher's relationship to the subject matter. If the material is modified, its title has to be changed.
The license also has provisions for the handling of front-cover and back-cover texts of books, as well as for History, Acknowledgements, Dedications and Endorsements sections. These features were added in part to make the license more financially attractive to commercial publishers of software documentation; Endorsements sections are intended to be used in official standard documents. The GFDL requires the ability to copy and distribute the Document in any medium, either commercially or noncommercially, and therefore is incompatible with material that excludes commercial re-use. Material that restricts commercial re-use is incompatible with the license and cannot be incorporated into the work; one example of such liberal and commercial fair use is parody. Although the two work on similar copyleft principles, the GFDL is not compatible with the Creative Commons Attribution-ShareAlike license. Version 1.3, however, added exemptions that allow a GFDL-based collaborative project with multiple authors to transition to the CC BY-SA 3.0 license, provided the work was published on a Massive Multiauthor Collaboration (MMC) site; if it was not originally published on an MMC, it can only be relicensed if it was added to an MMC before November 1, 2008. To prevent the clause from being used as a general compatibility measure, at the release of version 1.3 the FSF stated that all content added before November 1, 2008 to Wikipedia, as an example, satisfied the conditions.
GNU Free Documentation License
–
The GFDL logo
64.
Logic
–
Logic, originally meaning the word or what is spoken, is generally held to consist of the systematic study of the form of arguments. A valid argument is one where there is a relation of logical support between the assumptions of the argument and its conclusion. Historically, logic has been studied in philosophy and mathematics; more recently it has also been studied in computer science, linguistics, and psychology. The concept of logical form is central to logic: the validity of an argument is determined by its logical form. Traditional Aristotelian syllogistic logic and modern symbolic logic are examples of formal logic. Informal logic is the study of natural language arguments, and the study of fallacies is an important branch of informal logic, since much informal argument is not, strictly speaking, deductive. On some conceptions of logic, formal logic is the study of inference with purely formal content. An inference possesses a purely formal content if it can be expressed as an application of a wholly abstract rule, that is, a rule that is not about any particular thing or property. The works of Aristotle contain the earliest known study of logic, and modern formal logic follows and expands on Aristotle. In many definitions of logic, logical inference and inference with purely formal content are the same. This does not render the notion of informal logic vacuous, because no formal logic captures all of the nuances of natural language. Symbolic logic is the study of symbolic abstractions that capture the formal features of logical inference; it is divided into two main branches, propositional logic and predicate logic. Mathematical logic is an extension of symbolic logic into other areas, in particular to the study of model theory, proof theory, and set theory. Logic is generally considered formal when it analyzes and represents the form of any valid argument type. The form of an argument is displayed by representing its sentences in the formal grammar and symbolism of a logical language to make its content usable in formal inference.
Simply put, formalising means translating English sentences into the language of logic; this is called showing the logical form of the argument. It is necessary because indicative sentences of ordinary language show a considerable variety of forms, and certain parts of the sentence must be replaced with schematic letters. Thus, for example, the expression 'all Ps are Qs' shows the logical form common to the sentences 'all men are mortals', 'all cats are carnivores', 'all Greeks are philosophers', and so on. The schema can further be condensed into a formula in which the letter A indicates the judgement 'all - are -'. The importance of form was recognised from ancient times.
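The claim that validity attaches to the form can be tested mechanically for finite interpretations. The sketch below brute-forces every way of interpreting three schematic letters as predicates over a small three-element domain, and confirms that no interpretation makes the premises of the syllogism "all A are B, all C are A, therefore all C are B" true while making its conclusion false:

```python
from itertools import product

domain = range(3)  # a small finite domain of individuals

def all_are(p, q):
    # "All Ps are Qs": every member of the domain satisfying p also satisfies q.
    return all(q[x] for x in domain if p[x])

# Enumerate every assignment of three predicates A, B, C over the domain
# (each predicate is a tuple of truth values, one per individual).
counterexamples = 0
for a, b, c in product(product([False, True], repeat=len(domain)), repeat=3):
    if all_are(a, b) and all_are(c, a) and not all_are(c, b):
        counterexamples += 1

print(counterexamples)  # 0: no interpretation refutes the argument form
```

This only checks a three-element domain, of course; the point it illustrates is that the check never looks at what A, B, and C mean, only at the form of the argument.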
Logic
–
Aristotle, 384–322 BCE.
65.
History of logic
–
The history of logic deals with the study of the development of the science of valid inference. Formal logics developed in ancient times in China, India, and Greece; Greek methods, particularly Aristotelian logic as found in the Organon, found wide application and acceptance in Western science and mathematics for millennia. The Stoics, especially Chrysippus, began the development of predicate logic. Christian and Islamic philosophers such as Boethius and William of Ockham further developed Aristotle's logic in the Middle Ages, reaching a high point in the mid-fourteenth century. The period between the fourteenth century and the beginning of the nineteenth century saw largely decline and neglect; empirical methods ruled the day, as evidenced by Sir Francis Bacon's Novum Organum of 1620. Valid reasoning has been employed in all periods of human history; logic, however, studies the principles of reasoning and inference. It is probable that the idea of demonstrating a conclusion first arose in connection with geometry: the ancient Egyptians discovered geometry, including the formula for the volume of a truncated pyramid, and ancient Babylon was also skilled in mathematics. While the ancient Egyptians empirically discovered some truths of geometry, the great achievement of the ancient Greeks was to replace empirical methods by demonstrative proof. Both Thales and Pythagoras of the Pre-Socratic philosophers seem to have been aware of geometry's methods. Fragments of early proofs are preserved in the works of Plato and Aristotle, and the idea of a deductive system was probably known in the Pythagorean school and the Platonic Academy. The proofs of Euclid of Alexandria are a paradigm of Greek geometry. The three basic principles of geometry are as follows: certain propositions must be accepted as true without demonstration, and such a proposition is known as an axiom of geometry.
Every proposition that is not an axiom of geometry must be demonstrated as following from the axioms of geometry, and the proof must be formal; that is, the derivation of the proposition must be independent of the particular subject matter in question. Further evidence that early Greek thinkers were concerned with the principles of reasoning is found in the fragment called dissoi logoi, part of a protracted debate about truth and falsity. Thales was said to have had a sacrifice in celebration of discovering Thales' Theorem, just as Pythagoras had for the Pythagorean Theorem; Indian and Babylonian mathematicians knew his theorem for special cases before he proved it. It is believed that Thales learned that an angle inscribed in a semicircle is a right angle during his travels to Babylon. Before 520 BC, on one of his visits to Egypt or Greece, Pythagoras might have met Thales, who was about 54 years older. The systematic study of proof seems to have begun with the school of Pythagoras in the sixth century BC. Indeed, the Pythagoreans, believing all was number, are the first philosophers to emphasize form rather than matter. Heraclitus is known for his obscure sayings: this logos holds always, but humans always prove unable to understand it, both before hearing it and when they have first heard it, and other people fail to notice what they do when awake. In contrast to Heraclitus, Parmenides held that all is one and nothing changes.
History of logic
–
Plato's academy
History of logic
–
Aristotle's logic was still influential in the Renaissance
History of logic
–
Chrysippus of Soli
History of logic
–
A text by Avicenna, founder of Avicennian logic
66.
Set theory
–
Set theory is a branch of mathematical logic that studies sets, which informally are collections of objects. Although any type of object can be collected into a set, set theory is applied most often to objects that are relevant to mathematics, and the language of set theory can be used in the definitions of nearly all mathematical objects. The modern study of set theory was initiated by Georg Cantor. Set theory is commonly employed as a foundational system for mathematics, particularly in the form of Zermelo–Fraenkel set theory with the axiom of choice. Beyond its foundational role, set theory is a branch of mathematics in its own right; contemporary research into set theory includes a diverse collection of topics, ranging from the structure of the real number line to the study of the consistency of large cardinals. Mathematical topics typically emerge and evolve through interactions among many researchers; set theory, however, was founded by a single paper in 1874 by Georg Cantor, On a Property of the Collection of All Real Algebraic Numbers. Since the 5th century BC, beginning with Greek mathematician Zeno of Elea in the West and early Indian mathematicians in the East, mathematicians had struggled with the concept of infinity; especially notable is the work of Bernard Bolzano in the first half of the 19th century. Modern understanding of infinity began in 1867–71, with Cantor's work on number theory, and an 1872 meeting between Cantor and Richard Dedekind influenced Cantor's thinking and culminated in Cantor's 1874 paper. Cantor's work initially polarized the mathematicians of his day: while Karl Weierstrass and Dedekind supported Cantor, Leopold Kronecker, now seen as a founder of mathematical constructivism, did not. The utility of set theory led to the article Mengenlehre, contributed in 1898 by Arthur Schoenflies to Klein's encyclopedia, and in 1899 Cantor himself posed the question: what is the cardinal number of the set of all sets?
Russell used his paradox as a theme in his 1903 review of continental mathematics in his The Principles of Mathematics, and in 1906 English readers gained the book Theory of Sets of Points by William Henry Young and his wife Grace Chisholm Young, published by Cambridge University Press. The momentum of set theory was such that debate on the paradoxes did not lead to its abandonment: the work of Zermelo in 1908 and Abraham Fraenkel in 1922 resulted in the set of axioms ZFC, which became the most commonly used set of axioms for set theory, and the work of analysts such as Henri Lebesgue demonstrated the great mathematical utility of set theory. Set theory is commonly used as a foundational system, although in some areas category theory is thought to be a preferred foundation. Set theory begins with a binary relation between an object o and a set A: if o is a member of A, the notation o ∈ A is used. Since sets are objects, the membership relation can relate sets as well. A derived binary relation between two sets is the subset relation, also called set inclusion. If all the members of set A are also members of set B, then A is a subset of B; for example, {1, 2} is a subset of {1, 2, 3}, and so is {2}, but {1, 4} is not. As implied by this definition, a set is a subset of itself; for cases where this possibility is unsuitable or it makes sense to reject it, the term proper subset is defined.
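Python's built-in sets mirror these relations directly; a small sketch of membership, inclusion, and proper inclusion:

```python
A = {1, 2}
B = {1, 2, 3}

print(1 in B)       # membership: 1 ∈ B -> True
print(A <= B)       # inclusion: A ⊆ B -> True
print({2} <= B)     # {2} is also a subset of B -> True
print({1, 4} <= B)  # but {1, 4} is not -> False
print(B <= B)       # every set is a subset of itself -> True
print(B < B)        # ...but not a proper subset of itself -> False
```

The `<` operator implements exactly the proper-subset refinement the text describes: it requires inclusion plus inequality.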
Set theory
–
Georg Cantor
Set theory
–
A Venn diagram illustrating the intersection of two sets.
67.
Logical consequence
–
Logical consequence is a fundamental concept in logic, which describes the relationship between statements that holds true when one statement logically follows from one or more other statements. A valid logical argument is one in which the conclusion is entailed by the premises. The philosophical analysis of logical consequence involves the questions: in what sense does a conclusion follow from its premises, and what does it mean for a conclusion to be a consequence of premises? All of philosophical logic is meant to provide accounts of the nature of logical consequence and the nature of logical truth. Logical consequence is necessary and formal, by way of examples that explain with formal proof and models of interpretation. A sentence is said to be a logical consequence of a set of sentences, for a given language, if and only if the sentence must be true whenever every sentence in the set is true. The most widely prevailing view on how to best account for logical consequence is to appeal to formality; this is to say that whether statements follow from one another logically depends on the structure or logical form of the statements, without regard to the contents of that form. Syntactic accounts of logical consequence rely on schemes using inference rules. For instance, we can express the logical form of a valid argument as: All A are B; all C are A; therefore, all C are B. This argument is formally valid, because every instance of arguments constructed using this scheme is valid. This is in contrast to an argument like 'Fred is Mike's brother's son; therefore, Fred is Mike's nephew', whose validity depends on the meanings of the words. If you know that Q follows logically from P, no information about the possible interpretations of P or Q will affect that knowledge; our knowledge that Q is a consequence of P cannot be influenced by empirical knowledge. Deductively valid arguments can be known to be so without recourse to experience. However, formality alone does not guarantee that logical consequence is not influenced by empirical knowledge, so the a priori character of logical consequence is considered to be independent of formality.
The two prevailing techniques for providing accounts of logical consequence involve expressing the concept in terms of proofs and in terms of models. The study of syntactic consequence is called proof theory, whereas the study of semantic consequence is called model theory. A formula A is a syntactic consequence within some formal system FS of a set Γ of formulas if there is a formal proof in FS of A from the set Γ; this is written Γ ⊢_FS A. Syntactic consequence does not depend on any interpretation of the formal system. A formula A is a semantic consequence of Γ if and only if the set of the interpretations that make all members of Γ true is a subset of the set of the interpretations that make A true. Modal accounts of logical consequence are variations on the following basic idea: Γ ⊢ A is true if and only if it is necessary that, if all of the elements of Γ are true, then A is true. Alternatively, Γ ⊢ A is true if and only if it is impossible for all of the elements of Γ to be true and A false. Such accounts are called modal because they appeal to the modal notions of logical necessity and logical possibility. Consider the modal account in terms of the argument given as an example: the conclusion is a logical consequence of the premises because we can't imagine a possible world where all frogs are green, Kermit is a frog, and Kermit is not green.
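For propositional logic, the model-theoretic definition can be checked by brute force: enumerate every valuation of the variables and test whether any of them makes all members of Γ true while making A false. A minimal sketch, where formulas are encoded as Python functions from valuations to truth values (the encodings are illustrative, not a parser):

```python
from itertools import product

def semantically_entails(premises, conclusion, variables):
    # Gamma |= A holds iff every valuation making all premises true
    # also makes the conclusion true.
    for values in product([False, True], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # found a counter-valuation
    return True

# Modus ponens: {P, P -> Q} |= Q
premises = [lambda v: v["P"], lambda v: (not v["P"]) or v["Q"]]
conclusion = lambda v: v["Q"]
print(semantically_entails(premises, conclusion, ["P", "Q"]))  # True

# But {Q} does not entail P:
print(semantically_entails([lambda v: v["Q"]], lambda v: v["P"], ["P", "Q"]))  # False
```

Each valuation plays the role of an interpretation; the function returns True exactly when the set of valuations satisfying Γ is a subset of those satisfying A, matching the definition above.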
Logical consequence
–
Tautology
68.
Name
–
A name is a term used for identification. Names can identify a class or category of things, or a single thing, either uniquely or within a given context. A personal name identifies, not necessarily uniquely, an individual human. The name of an entity is sometimes called a proper name and is, when consisting of only one word, a proper noun. Other nouns are sometimes called common names or general names. A name can be given to a person, place, or thing. Caution must be exercised when translating, for there are ways that one language may prefer one type of name over another; also, claims to preference or authority can be refuted: the British did not refer to Louis-Napoleon as Napoleon III during his rule. The word name comes from Old English nama, cognate with Old High German namo, Sanskrit नामन्, Latin nomen, and Greek ὄνομα, perhaps connected to non-Indo-European terms such as Tamil namam and Proto-Uralic *nime. In the ancient world, particularly in the ancient Near East, names were thought to be powerful and to act, in some ways, as agents in their own right. By invoking a god or spirit by name, one was thought to be able to summon that spirit's power for some kind of miracle or magic. In the Old Testament, the names of individuals are meaningful, and a change of name indicates a change of status: for example, the patriarch Abram and his wife Sarai are renamed Abraham and Sarah. Simon was renamed Peter when he was given the Keys to Heaven; this is recounted in the Gospel of Matthew chapter 16, which according to Roman Catholic teaching was when Jesus promised to Saint Peter the power to take binding actions. Throughout the Bible, characters are given names at birth that reflect something of significance or describe the course of their lives. For example, Solomon meant peace, and the king with that name was the first whose reign was without war. Likewise, Joseph named his firstborn son Manasseh (meaning 'causing to forget') when Joseph said, 'God has made me forget all my troubles.' However, individuals were usually known as the child of their father.
For example, דוד בן ישי means David, son of Jesse. The Talmud also states that all those who descend to Gehenna will rise in the time of the Messiah; however, there are three exceptions, one of which is he who calls another by a derisive nickname. Street names within a city may follow a naming convention. Some examples include: in Manhattan, roads that cross the island from east to west are called Streets, while those that run the length of the island are called Avenues; in Ontario, numbered concession roads are east–west whereas lines are north–south routes.
Name
–
A cartouche indicates that the Egyptian hieroglyphs enclosed are a royal name.
69.
Paradox
–
A paradox is a statement that, despite apparently sound reasoning from true premises, leads to a self-contradictory or a logically unacceptable conclusion. A paradox involves contradictory yet interrelated elements that exist simultaneously and persist over time. Some logical paradoxes are known to be invalid arguments but are still valuable in promoting critical thinking; some paradoxes have revealed errors in definitions assumed to be rigorous; others, such as Curry's paradox, are not yet resolved. Examples outside logic include the Ship of Theseus from philosophy. Paradoxes can also take the form of images or other media: M. C. Escher featured perspective-based paradoxes in many of his drawings, with walls that are regarded as floors from other points of view, and staircases that appear to climb endlessly. In common usage, the word often refers to statements that may be both true and false, i.e. ironic or unexpected, such as the paradox that standing is more tiring than walking. Common themes in paradoxes include self-reference, infinite regress, and circular definitions. Patrick Hughes outlines three laws of the paradox. Self-reference: an example is 'This statement is false', a form of the liar paradox, in which the statement refers to itself; another example of self-reference is the question of whether the barber shaves himself in the barber paradox, and one more would be 'Is the answer to this question no?'. Contradiction: 'This statement is false' cannot be false and true at the same time; another example of contradiction is if a man talking to a genie wishes that wishes couldn't come true. Vicious circularity, or infinite regress: if the statement 'This statement is false' is true, then the statement is false, thereby making the statement true; another example of vicious circularity is a group of statements that refer to each other's truth. Other paradoxes involve false statements or half-truths and the resulting biased assumptions.
This form is common in howlers. For example, consider a situation in which a father and his son are driving down the road. The car crashes into a tree and the father is killed; the boy is rushed to the nearest hospital, where he is prepared for emergency surgery. On entering the operating suite, the surgeon says, "I can't operate on this boy. He's my son." The apparent paradox is caused by a hasty generalization: if the surgeon is assumed to be the boy's father, the statement cannot be true. The paradox is resolved if it is revealed that the surgeon is a woman, the boy's mother. Paradoxes which are not based on a hidden error generally occur at the fringes of context or language; paradoxes that arise from apparently intelligible uses of language are often of interest to logicians and philosophers. Russell's paradox, which shows that the notion of the set of all sets that do not contain themselves leads to a contradiction, was instrumental in the development of modern logic.
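Russell's construction can itself be sketched in a few lines of Python. In the following illustration (not part of the original text; the names empty and R are invented for the sketch), a "set" is modeled as its membership predicate; Russell's set R, which contains exactly those sets that do not contain themselves, then has no consistent answer to whether it contains itself:

```python
import sys

# Model a "set" as its membership predicate: s(x) means "x is in s".
# An ordinary set that does not contain itself:
empty = lambda x: False
print(empty(empty))  # False: the empty set does not contain itself

# Russell's set R contains exactly those sets that do NOT contain themselves.
R = lambda s: not s(s)

# Asking whether R contains itself unwinds to R(R) == not R(R):
# the contradiction shows up as unbounded recursion.
sys.setrecursionlimit(200)
try:
    R(R)
except RecursionError:
    print("R(R) has no consistent truth value")
```

The recursion error is the computational face of the contradiction: no truth value assigned to R(R) can satisfy the definition.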
70.
Reason
–
Reason, or an aspect of it, is sometimes referred to as rationality. Reasoning is associated with thinking, cognition, and intellect. Along these lines, a distinction is often drawn between discursive reason, reason proper, and intuitive reason, in which the reasoning process, however valid, tends toward the personal and the opaque. Reason, like habit or intuition, is one of the ways by which thinking moves from one idea to a related idea. For example, it is the means by which rational beings understand themselves to think about cause and effect, truth and falsehood, and what is good or bad. It is also identified with the ability to self-consciously change beliefs, attitudes, traditions, and institutions. In contrast to reason as an abstract noun, a reason is a consideration which explains or justifies some event or phenomenon. The field of logic studies the ways in which human beings reason formally through argument; the field of automated reasoning studies how reasoning may or may not be modeled computationally; animal psychology considers the question of whether animals other than humans can reason. The original Greek term was λόγος (logos), the root of the modern English word logic, but also a word which could mean, for example, speech, explanation, or an account. As a philosophical term, logos was translated in its non-linguistic senses in Latin as ratio; this was originally not just a translation used for philosophy, but was also commonly a translation for logos in the sense of an account of money. French raison is derived directly from Latin, and this is the source of the English word reason. Some philosophers, Thomas Hobbes for example, also used the word ratiocination as a synonym for reasoning. Philosophy can be described as a way of life based upon reason, and in the other direction reason has been one of the major subjects of philosophical discussion since ancient times.
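The idea of modeling reasoning computationally, mentioned above, can be shown in miniature. The following Python sketch (an illustration added here, not drawn from the text) verifies by truth table that the classical inference rule modus ponens, which infers Q from the premises P and "P implies Q", never leads from true premises to a false conclusion:

```python
from itertools import product

# Enumerate all truth assignments to P and Q; the rule is valid
# iff Q is true in every row where both premises hold.
valid = all(
    q
    for p, q in product([False, True], repeat=2)
    if p and ((not p) or q)  # both premises true in this row
)
print(valid)  # True: modus ponens never yields a false conclusion
```

Automated reasoning systems generalize this brute-force check to far larger formulas, where clever search replaces exhaustive enumeration.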
Reason is often said to be reflexive, or self-correcting, and it has been defined in different ways, at different times, by different thinkers about human nature. Perhaps starting with Pythagoras or Heraclitus, the cosmos is even said to have reason. Reason, by this account, is not just one characteristic among others that humans happen to have and that influences happiness. Within the human mind or soul, reason was described by Plato as the monarch which should rule over the other parts, such as spiritedness. Aristotle, Plato's student, defined human beings as rational animals, and he defined the highest human happiness or well-being as a life lived consistently, excellently, and completely in accordance with reason. The conclusions to be drawn from the discussions of Aristotle and Plato on this matter are amongst the most debated in the history of philosophy. For example, in the neo-Platonist account of Plotinus, the cosmos has one soul, which is the seat of all reason; reason is for Plotinus both the provider of form to material things and the light which brings individual souls back into line with their source. The early modern era was marked by a number of significant changes in the understanding of reason, one of the most important of which involved a change in the metaphysical understanding of human beings.
Reason
–
Francisco de Goya, The Sleep of Reason Produces Monsters (El sueño de la razón produce monstruos), c. 1797
Reason
–
René Descartes
Reason
–
Dan Sperber believes that reasoning in groups is more effective and promotes evolutionary fitness.
71.
Reference
–
Reference is a relation between objects in which one object designates, or acts as a means by which to connect or link to, another object. The first object in this relation is said to refer to the second object; the second object, the one to which the first object refers, is called the referent of the first object. In some cases, methods are used that intentionally hide the reference from some observers. References feature in many spheres of human activity and knowledge, and the term adopts shades of meaning particular to the contexts in which it is used; some of them are described in the sections below. The word reference is derived from Middle English referren, from Middle French référer, from Latin referre, "to carry back", formed from the prefix re- and ferre, "to bear". A number of words derive from the same root, including refer, referee, referential, and referent. The verb refer and its derivatives may carry the sense of "link to" or "connect to"; another sense is "consult", which is reflected in such expressions as reference work, reference desk, job reference, etc. In semantics, reference is generally construed as the relationship between nouns or pronouns and the objects named by them; hence, the word John refers to the person John, and the word it refers to some previously specified object. The object referred to is called the referent of the word. Sometimes the word-object relation is called denotation: the word denotes the object. The converse relation, from object to word, is called exemplification: the object exemplifies what the word denotes. In syntactic analysis, if a word refers to a previous word, the previous word is called the antecedent. The observation that two expressions can refer to the same object while differing in meaning led Frege to distinguish between the sense and the reference of a word. Some cases seem too complicated to be classified within this framework; words can often be meaningful without having a concrete here-and-now referent.
Fictional and mythological names such as Bo-Peep and Hercules illustrate this possibility, and sign links with absent referents also allow for discussing abstract ideas as well as people and events of the past and future. For those who argue that one cannot directly experience the divine, the name of God likewise functions as a sign with an absent referent. Additionally, certain sects of Judaism and other religions consider it sinful to write, discard, or deface the name of the divine; to avoid this problem, the signifier G-d is sometimes used. The very concept of the linguistic sign is the combination of content and expression, the former of which may refer to entities in the world or to more abstract concepts, e.g. thought. Certain parts of speech exist only to express reference, namely anaphora such as pronouns. The subset of reflexives expresses co-reference of two participants in a sentence; these could be the agent and patient, as in "The man washed himself", the theme and recipient, as in "I showed Mary to herself", or various other possible combinations. In computer science, references are data types that refer to an object elsewhere in memory and are used to construct a variety of data structures. Generally, a reference is a value that enables a program to access a particular data item.
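The computer-science sense of reference can be made concrete with a short Python sketch (an illustration added here; the Node class and the variable names are invented for the example). Every Python variable holds a reference to an object, and a linked list is built by storing, in each node, a reference to the next one:

```python
class Node:
    """A linked-list node: holds a value and a reference to the next node."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next  # reference to another Node, or None

# Build the list 1 -> 2 -> 3 by linking references.
head = Node(1, Node(2, Node(3)))

# Two names can refer to the same object: a change made through
# one alias is visible through the other, since both are references.
alias = head
alias.value = 99
print(head.value)  # 99

# Follow the chain of references to collect all values.
values = []
node = head
while node is not None:
    values.append(node.value)
    node = node.next
print(values)  # [99, 2, 3]
```

The aliasing behavior shown in the middle of the sketch is exactly the semantic point made above: distinct signs (head, alias) with the same referent.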
Reference
–
The triangle of reference, from the influential book The Meaning of Meaning (1923) by C. K. Ogden and I. A. Richards.
72.
List of paradoxes
–
This is a list of paradoxes, grouped thematically. The grouping is approximate, as paradoxes may fit more than one category. Because of varying definitions of the word paradox, some of the following are not considered to be paradoxes by everyone. This list collects only scenarios that have been called a paradox by at least one source and have their own article. Although considered paradoxes, some of these are based on fallacious reasoning; informally, the term is often used to describe a counter-intuitive result. Barbershop paradox: the supposition that, if one of two simultaneous assumptions leads to a contradiction, the other assumption is also disproved leads to paradoxical consequences; not to be confused with the barber paradox. What the Tortoise Said to Achilles: "Whatever Logic is good enough to tell me is worth writing down." Also known as Carroll's paradox, not to be confused with the physical paradox of the same name. Catch-22: a situation in which someone is in need of something that can only be had by not being in need of it; a soldier who wants to be declared insane in order to avoid combat is deemed not insane for that very reason. Drinker paradox: in any pub there is a customer of whom it is true to say that if that customer is drinking, everybody in the pub is drinking. Paradox of entailment: inconsistent premises always make an argument valid. Raven paradox: observing a green apple increases the likelihood of all ravens being black. Ross's paradox: disjunction introduction poses a problem for imperative inference by seemingly permitting arbitrary imperatives to be inferred. Unexpected hanging paradox: the day of the hanging will be a surprise, so it cannot happen at all; the surprise examination and Bottle Imp paradox use similar logic. Barber paradox: a barber shaves all and only those men who do not shave themselves; does he shave himself? Bhartrhari's paradox: the thesis that there are things which are unnameable conflicts with the notion that something is named by calling it unnameable. 
Berry paradox: the phrase "the first number not nameable in under ten words" appears to name it in nine words. Paradox of the Court: a law student agrees to pay his teacher after winning his first case; the teacher then sues the student for payment. Curry's paradox: "If this sentence is true, then Santa Claus exists." Epimenides paradox: a Cretan says, "All Cretans are liars." This paradox works in mainly the same way as the liar paradox.
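Several of the "paradoxes" above are in fact theorems of classical logic that merely sound counter-intuitive. The drinker paradox is one such case, and it can be checked by brute force. The following Python sketch (an illustration added here; the helper drinker_holds is invented for the example) enumerates every possible pattern of drinkers in a small pub and confirms that there is always someone such that, if that person is drinking, everyone is drinking:

```python
from itertools import product

def drinker_holds(pub_size):
    """Check the drinker paradox for a pub of the given size:
    in every pattern of who is drinking, there exists a customer p
    such that drinks[p] implies that everybody drinks."""
    for drinks in product([False, True], repeat=pub_size):
        witness_exists = any(
            (not drinks[p]) or all(drinks)  # material implication
            for p in range(pub_size)
        )
        if not witness_exists:
            return False
    return True

print(all(drinker_holds(n) for n in range(1, 6)))  # True
```

The check succeeds for the trivial reason the paradox exploits: if everyone drinks, any customer is a witness; if someone does not drink, that non-drinker is a witness, because an implication with a false antecedent is true.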
List of paradoxes
–
Abilene
List of paradoxes
–
The Monty Hall problem: which door do you choose?
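The Monty Hall problem named in the caption can be settled by simulation. The following Python sketch (an illustration added here; the helper play is invented for the example) assumes the standard rules: the host, who knows where the car is, always opens a goat door the contestant did not pick, and the contestant then either stays or switches:

```python
import random

def play(switch, trials=100_000):
    """Simulate the Monty Hall game; return the observed win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(round(play(switch=False), 2))  # about 0.33
print(round(play(switch=True), 2))   # about 0.67
```

Switching wins whenever the first pick was wrong, which happens two times out of three; staying wins only when the first pick was right.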
73.
Integrated Authority File
–
The Integrated Authority File or GND is an international authority file for the organisation of personal names, subject headings, and corporate bodies from catalogues. It is used mainly for documentation in libraries and increasingly also by archives. The GND is managed by the German National Library in cooperation with various regional library networks in German-speaking Europe and other partners. The GND falls under the Creative Commons Zero license. The GND specification provides a hierarchy of high-level entities and sub-classes, useful in library classification, and an approach to the unambiguous identification of single elements. It also comprises an ontology intended for knowledge representation in the semantic web, available in RDF format.
Integrated Authority File
–
GND screenshot