Theorem

In mathematics, a theorem is a statement that has been proven on the basis of previously established statements, such as other theorems, and generally accepted statements, such as axioms. A theorem is a logical consequence of the axioms; the proof of a mathematical theorem is a logical argument for the theorem statement given in accord with the rules of a deductive system. The proof of a theorem is interpreted as justification of the truth of the theorem statement. In light of the requirement that theorems be proved, the concept of a theorem is fundamentally deductive, in contrast to the notion of a scientific law, which is experimental. Many mathematical theorems are conditional statements; in this case, the proof deduces the conclusion from conditions called hypotheses or premises. In light of the interpretation of proof as justification of truth, the conclusion is viewed as a necessary consequence of the hypotheses: namely, that the conclusion is true whenever the hypotheses are true, without any further assumptions. However, the conditional could be interpreted differently in certain deductive systems, depending on the meanings assigned to the derivation rules and the conditional symbol.

Although they can be written in a completely symbolic form, for example within the propositional calculus, theorems are often expressed in a natural language such as English. The same is true of proofs, which are often expressed as logically organized and clearly worded informal arguments, intended to convince readers of the truth of the statement of the theorem beyond any doubt, and from which a formal symbolic proof can in principle be constructed. Such arguments are typically easier to check than purely symbolic ones; indeed, many mathematicians would express a preference for a proof that not only demonstrates the validity of a theorem, but also explains in some way why it is true. In some cases, a picture alone may be sufficient to prove a theorem. Because theorems lie at the core of mathematics, they are also central to its aesthetics. Theorems are often described as being "trivial", or "difficult", or "deep", or even "beautiful"; these subjective judgments vary not only from person to person, but also with time: for example, as a proof is simplified or better understood, a theorem that was once difficult may become trivial.

On the other hand, a deep theorem may be stated simply, but its proof may involve surprising and subtle connections between disparate areas of mathematics. Fermat's Last Theorem is a particularly well-known example of such a theorem. Logically, many theorems are of the form of an indicative conditional: if A, then B; such a theorem does not assert B, only that B is a necessary consequence of A. In this case, A is called the hypothesis and B the conclusion; the theorem "If n is an even natural number, then n/2 is a natural number" is a typical example in which the hypothesis is "n is an even natural number" and the conclusion is "n/2 is a natural number". To be proved, a theorem must be expressible as a precise, formal statement. Nevertheless, theorems are usually expressed in natural language rather than in a completely symbolic form, with the intention that the reader can produce a formal statement from the informal one. It is common in mathematics to choose a number of hypotheses within a given language and declare that the theory consists of all statements provable from these hypotheses; these hypotheses are called axioms or postulates.
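To illustrate what such a formal statement might look like, the conditional "if n is an even natural number, then n/2 is a natural number" can be rendered symbolically. One possible first-order formalisation over the natural numbers (avoiding a division symbol by naming the quotient as a witness m) is:

```latex
\forall n \in \mathbb{N}.\;
  \bigl( \exists k \in \mathbb{N}.\; n = 2k \bigr)
  \rightarrow
  \bigl( \exists m \in \mathbb{N}.\; 2m = n \bigr)
```

Here the hypothesis is the evenness condition and the witness m is precisely the value n/2, so the conclusion asserts that n/2 is itself a natural number.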

The field of mathematics known as proof theory studies formal languages and the structure of proofs. Some theorems are "trivial", in the sense that they follow from definitions and other theorems in obvious ways and do not contain any surprising insights. Some, on the other hand, may be called "deep", because their proofs may be long and difficult, involve areas of mathematics superficially distinct from the statement of the theorem itself, or show surprising connections between disparate areas of mathematics. A theorem might be simple to state and yet be deep. An excellent example is Fermat's Last Theorem, and there are many other examples of simple yet deep theorems in number theory and combinatorics, among other areas. Other theorems have a known proof that cannot easily be written down; the most prominent examples are the four color theorem and the Kepler conjecture. Both of these theorems are only known to be true by reducing them to a computational search, verified by a computer program. Initially, many mathematicians did not accept this form of proof, but it has become more widely accepted.

The mathematician Doron Zeilberger has even gone so far as to claim that these are possibly the only nontrivial results that mathematicians have ever proved. Many mathematical theorems can be reduced to more straightforward computation, including polynomial identities, trigonometric identities and hypergeometric identities. To establish a mathematical statement as a theorem, a proof is required; that is, a line of reasoning from the axioms in the system to the given statement must be demonstrated. However, the proof is considered as separate from the theorem statement. Although more than one proof may be known for a single theorem, only one proof is required to establish the status of a statement as a theorem; the Pythagorean theorem and the law of quadratic reciprocity are contenders for the title of theorem with the greatest number of distinct proofs. Theorems in mathematics and theories in science are fundamentally different in their epistemology. A scientific theory cannot be proved.
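As a small illustration of reducing identities to computation, the sketch below (plain Python; the checks and ranges are my own choices) spot-checks a trigonometric identity numerically and checks a polynomial identity exactly at integer points. For a polynomial identity of bounded degree, checking enough points actually constitutes a proof, whereas the floating-point checks are only strong evidence:

```python
import math
import random

# Numerically spot-check a trigonometric identity at many random points.
# This is evidence, not proof; symbolic algorithms can turn such
# verification tasks into actual proofs for whole identity classes.
random.seed(0)
for _ in range(1000):
    x = random.uniform(-100, 100)
    # sin^2(x) + cos^2(x) = 1
    assert math.isclose(math.sin(x)**2 + math.cos(x)**2, 1.0, abs_tol=1e-9)

# Polynomial identity: (a + b)^2 = a^2 + 2ab + b^2, checked exactly on
# integers. Since both sides are polynomials of degree 2 in each variable,
# agreement on this many points already forces the identity to hold.
for a in range(-50, 51):
    for b in range(-50, 51):
        assert (a + b)**2 == a*a + 2*a*b + b*b
```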

University of Sheffield

The University of Sheffield is a public research university in Sheffield, South Yorkshire, England. It received its royal charter in 1905 as successor to the University College of Sheffield, established in 1897 by the merger of Sheffield Medical School, Firth College and Sheffield Technical School. Sheffield is a multi-campus university spread predominantly over two campus areas: Western Bank and St George's. The university is organised into five academic faculties composed of multiple departments. It had 20,005 undergraduate and 8,710 postgraduate students in 2016/17. The annual income of the institution for 2017–18 was £691.8 million, of which £197.5 million was from research grants and contracts, with an expenditure of £636.8 million. Sheffield ranks among the top 10 UK universities for research grant funding. Sheffield was placed 75th worldwide and 13th in the UK according to the QS World University Rankings, and 106th worldwide and 12th in the UK according to the Times Higher Education World University Rankings.

It was ranked 12th in the UK amongst multi-faculty institutions for the quality of its research and for its Research Power in the 2014 Research Excellence Framework. In 2011, Sheffield was named 'University of the Year' in the Times Higher Education awards. The Times Higher Education Student Experience Survey 2014 ranked the University of Sheffield 1st for student experience, social life, university facilities and accommodation, among other categories. It is one of the original red brick universities, and a member of the Russell Group of research-intensive universities, the Worldwide Universities Network, the N8 Group of the eight most research-intensive universities in Northern England, and the White Rose University Consortium. There are eight Nobel laureates affiliated with Sheffield, and six of them are alumni or former long-term staff of the university. The University of Sheffield was formed by the merger of three colleges. The Sheffield School of Medicine was founded in 1828, followed in 1879 by the opening of Firth College, founded by Mark Firth, a steel manufacturer, to teach arts and science subjects; the college developed out of the Cambridge University Extension Movement scheme.

Firth College helped to fund the opening of the Sheffield Technical School in 1884 to teach applied science, the only major faculty the existing colleges did not cover. The Sheffield Technical School was founded because of local concern about the need for technical training in steelmaking in Sheffield, and the school moved to St George's Square in 1886. The three institutions merged in 1897 to form the University College of Sheffield by Royal Charter. At that time, Sheffield was the only large city in England without a university. Steelworkers, coal miners, factory workers and the people of Sheffield donated over £50,000 in 1904 to help found the University of Sheffield. It was envisaged that the University College would join Manchester, Liverpool and Leeds as the fourth member of the federal Victoria University. However, the Victoria University began to split up into independent universities before this could happen, and so the University College of Sheffield received its own Royal Charter on 31 May 1905 and became the University of Sheffield.

In July 1905, Firth Court on Western Bank was opened by King Edward VII and Queen Alexandra. St George's Square remained the centre of the departments of Applied Science, while the departments of Arts and Science moved to Western Bank. Sheffield is one of the six red brick universities, the civic universities founded in the major industrial cities of England. In 1905, there were 114 full-time students, and the first Hall of Residence and library had been established by then; the number of students increased to a short-lived peak of 1,000 in 1919. During the First World War, some of the academic subjects and courses were replaced by the teaching of munitions making and medical appliances production. Rather than from a single centre, the university has expanded since the 1920s from two ends: Firth Court on Western Bank and the Sir Frederick Mappin Building on the St George's site. In 1943, the University Grants Committee announced that universities in the UK should look forward to expansion in the years after the Second World War.

Sheffield predicted a 50% increase in student population, but the university was unprepared for such growth. There was pressure on the university to expand, since student numbers had increased from around 1,000 to 3,000 by 1946. The university announced proposals for development in 1947, which emphasised the need for new departments, a medical school, an administration building and halls of residence, as well as the completion of the Western Bank Quadrangles and the extension of the Students' Union. The university grew steadily until the 1950s and 1960s, when it began to expand rapidly. Many new buildings were built and older houses were brought into academic use, and student numbers increased to their present levels of just under 26,000. At the same time, in the 1950s, the university was expanding at other sites, including the St George's area. From the 1960s, many more buildings have been constructed or extended, including the Union of Students and St George's Library. The campus master plan proposed in the 1940s was completed by the 1970s, and the university required a new development plan.

The 1980s saw the opening of many new buildings and centres, such as the multi-purpose Octagon Centre and the Sir Henry Stephenson Building. The university's teaching hospital, the Northern General Hospital, was extended. In 1987, the University began to collaborate with its former would-be partners of the Victoria University by co-founding the Northern Consortium.

Simulation

A simulation is an approximate imitation of the operation of a process or system; simulating something first requires that a model be developed. This model is a well-defined description of the simulated subject and represents its key characteristics, such as its behaviour and abstract or physical properties; the model represents the system itself, whereas the simulation represents the system's operation over time. Simulation is used in many contexts, such as simulation of technology for performance optimization, safety engineering, training and video games. Computer experiments are used to study simulation models. Simulation is used with scientific modelling of natural systems or human systems to gain insight into their functioning, as in economics. Simulation can be used to show the eventual real effects of alternative conditions and courses of action. Simulation is also used when the real system cannot be engaged, because it may not be accessible, it may be dangerous or unacceptable to engage, it is being designed but not yet built, or it may simply not exist. Key issues in simulation include the acquisition of valid source information about the relevant selection of key characteristics and behaviours, the use of simplifying approximations and assumptions within the simulation, and the fidelity and validity of the simulation outcomes.

Procedures and protocols for model verification and validation are an ongoing field of academic study, refinement and development in simulation technology and practice, particularly in the field of computer simulation. Simulations used in different fields developed independently, but 20th-century studies of systems theory and cybernetics, combined with the spreading use of computers across all those fields, have led to some unification and a more systematic view of the concept. Physical simulation refers to simulation in which physical objects are substituted for the real thing; these physical objects are often chosen because they are smaller or cheaper than the actual object or system. Interactive simulation is a special kind of physical simulation, often referred to as a human-in-the-loop simulation, in which physical simulations include human operators, such as in a flight simulator, sailing simulator, or driving simulator. Continuous simulation is a simulation where time evolves continuously based on numerical integration of differential equations.
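A minimal sketch of continuous simulation, assuming nothing beyond the standard library: the differential equation dy/dt = -k·y is advanced in small time steps with the forward Euler method and compared against its closed-form solution. The function name and parameters are illustrative; production simulators use adaptive, higher-order integration schemes:

```python
import math

def euler_decay(y0, k, dt, t_end):
    """Integrate dy/dt = -k*y from y(0) = y0 up to t_end with forward Euler."""
    y, t = y0, 0.0
    while t < t_end - 1e-12:
        y += dt * (-k * y)   # one numerical integration step
        t += dt
    return y

approx = euler_decay(y0=1.0, k=0.5, dt=0.001, t_end=2.0)
exact = math.exp(-0.5 * 2.0)        # closed-form solution y(t) = y0 * e^(-k t)
assert abs(approx - exact) < 1e-3   # a small step size keeps the error small
```

Halving `dt` roughly halves the error, the characteristic first-order behaviour of Euler's method.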

Discrete event simulation is a simulation where time evolves along events that represent critical moments, while the values of the variables are either not relevant between two of them or are trivial to compute if needed. Stochastic simulation is a simulation where some variable or process is regulated by stochastic factors and estimated based on Monte Carlo techniques using pseudo-random numbers, so replicated runs from the same boundary conditions are expected to produce different results within a specific confidence band. Deterministic simulation is a simulation where the variables are regulated by deterministic algorithms, so replicated runs from the same boundary conditions always produce identical results. Hybrid simulation corresponds to a mix between continuous and discrete event simulation; it integrates the differential equations numerically between two sequential events to reduce the number of discontinuities. Stand-alone simulation is a simulation running on a single workstation by itself.
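The contrast between stochastic and deterministic runs can be sketched with a Monte Carlo estimate of pi (a toy model of my own choosing): fixing the pseudo-random seed makes a stochastic run exactly reproducible, while varying the seed gives different results that nonetheless stay within a confidence band around the true value:

```python
import random

def estimate_pi(seed, n=100_000):
    """Monte Carlo estimate of pi from random points in the unit square."""
    rng = random.Random(seed)           # seeded pseudo-random number stream
    inside = sum(rng.random()**2 + rng.random()**2 <= 1.0 for _ in range(n))
    return 4.0 * inside / n

# Same seed: the pseudo-random stream is identical, so replicated runs
# reproduce exactly, which is how stochastic simulations are made repeatable.
assert estimate_pi(seed=1) == estimate_pi(seed=1)

# Different seeds: results differ run to run, but each estimate stays
# within a confidence band around the true value of pi.
estimates = [estimate_pi(seed=s) for s in range(10)]
assert all(abs(e - 3.14159265) < 0.05 for e in estimates)
```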

Distributed simulation operates over distributed computers in order to guarantee access from/to different resources. In Modeling & Simulation as a Service, simulation is accessed as a service over the web. In modeling, interoperable simulation and serious games, serious-game approaches are integrated with interoperable simulation. Simulation fidelity is used to describe the accuracy of a simulation and how closely it imitates the real-life counterpart. Fidelity is broadly classified into one of three categories: low, medium and high. Specific descriptions of fidelity levels are subject to interpretation, but the following generalization can be made: low is the minimum simulation required for a system to respond to accept inputs and provide outputs; medium responds automatically to stimuli, with limited accuracy; high is nearly indistinguishable from, or as close as possible to, the real system. Human-in-the-loop simulations can include a computer simulation as a so-called synthetic environment. Simulation in failure analysis refers to simulation in which we create an environment or conditions to identify the cause of equipment failure.

This was the fastest method to identify the failure cause. A computer simulation is an attempt to model a real-life or hypothetical situation on a computer so that it can be studied to see how the system works. By changing variables in the simulation, predictions may be made about the behaviour of the system; it is a tool to investigate the behaviour of the system under study. Computer simulation has become a useful part of modeling many natural systems in physics and biology, and human systems in economics and social science, as well as in engineering, to gain insight into the operation of those systems.

Algebra

Algebra is one of the broad parts of mathematics, together with number theory, geometry and analysis. In its most general form, algebra is the study of mathematical symbols and the rules for manipulating these symbols; it includes everything from elementary equation solving to the study of abstractions such as groups and fields. The more basic parts of algebra are called elementary algebra. Elementary algebra is considered to be essential for any study of mathematics, science, or engineering, as well as such applications as medicine and economics. Abstract algebra is a major area in advanced mathematics, studied primarily by professional mathematicians. Elementary algebra differs from arithmetic in the use of abstractions, such as using letters to stand for numbers that are either unknown or allowed to take on many values. For example, in x + 2 = 5 the letter x is unknown, but the law of inverses can be used to discover its value: x = 3. In E = mc², the letters E and m are variables, and the letter c is a constant, the speed of light in a vacuum.

Algebra gives methods for writing formulas and solving equations that are much clearer and easier than the older method of writing everything out in words. The word algebra is also used in certain specialized ways. A special kind of mathematical object in abstract algebra is called an "algebra", and the word is used, for example, in the phrases linear algebra and algebraic topology. A mathematician who does research in algebra is called an algebraist. The word algebra comes from the Arabic الجبر, from the title of the book Ilm al-jabr wa'l-muḳābala by the Persian mathematician and astronomer al-Khwarizmi. The word entered the English language during the fifteenth century, from either Spanish, Italian, or Medieval Latin, where it referred to the surgical procedure of setting broken or dislocated bones. The mathematical meaning was first recorded in the sixteenth century. The word "algebra" has several related meanings, as a single word or with qualifiers. As a single word without an article, "algebra" names a broad part of mathematics.

As a single word with an article or in plural, "an algebra" or "algebras" denotes a specific mathematical structure, whose precise definition depends on the author. The structure has an addition, a multiplication, and a scalar multiplication; when some authors use the term "algebra", they make a subset of the following additional assumptions: associative, unital, and/or finite-dimensional. In universal algebra, the word "algebra" refers to a generalization of the above concept, which allows for n-ary operations. With a qualifier, there is the same distinction: without an article, it means a part of algebra, such as linear algebra, elementary algebra, or abstract algebra; with an article, it means an instance of some abstract structure, like a Lie algebra, an associative algebra, or a vertex operator algebra. Sometimes both meanings exist for the same qualifier, as in the sentence: commutative algebra is the study of commutative rings, which are commutative algebras over the integers. Algebra began with letters standing for numbers.

This allowed proofs of properties that hold no matter which numbers are involved. For example, in the quadratic equation ax² + bx + c = 0, a, b and c can be any numbers whatsoever (except that a cannot be 0), and the quadratic formula can be used to quickly and easily find the values of the unknown quantity x which satisfy the equation; that is to say, to find all the solutions of the equation. In current teaching, the study of algebra starts with the solving of equations such as the quadratic equation above. More general questions, such as "does an equation have a solution?", "how many solutions does an equation have?", and "what can be said about the nature of the solutions?" are then considered. These questions led to extending algebra to non-numerical objects, such as permutations, vectors and polynomials; the structural properties of these non-numerical objects were then abstracted into algebraic structures such as groups and fields. Before the 16th century, mathematics was divided into only two subfields, arithmetic and geometry. Though some methods, which had been developed much earlier, may be considered nowadays as algebra, the emergence of algebra and, soon thereafter, of infinitesimal calculus as subfields of mathematics only dates from the 16th or 17th century.
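The quadratic formula x = (-b ± √(b² − 4ac)) / 2a can be sketched directly in code. This is an illustrative helper for the real-roots case, not a robust numerical solver (for extreme coefficients, floating-point cancellation would call for a more careful formulation):

```python
import math

def solve_quadratic(a, b, c):
    """Return the real solutions of a*x^2 + b*x + c = 0, assuming a != 0."""
    disc = b*b - 4*a*c                  # the discriminant decides the root count
    if disc < 0:
        return []                       # no real solutions
    root = math.sqrt(disc)
    # A set removes the duplicate when disc == 0 (a repeated root).
    return sorted({(-b - root) / (2*a), (-b + root) / (2*a)})

# x^2 - 5x + 6 = 0 factors as (x - 2)(x - 3), so the solutions are 2 and 3.
assert solve_quadratic(1, -5, 6) == [2.0, 3.0]
```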

From the second half of the 19th century on, many new fields of mathematics appeared, most of which made use of both arithmetic and geometry, and almost all of which used algebra. Today, algebra has grown until it includes many branches of mathematics, as can be seen in the Mathematics Subject Classification, where none of the first-level areas is called algebra.

Game theory

Game theory is the study of mathematical models of strategic interaction between rational decision-makers. It has applications in all fields of social science, as well as in computer science. Originally, it addressed zero-sum games, in which one person's gains result in losses for the other participants. Today, game theory applies to a wide range of behavioral relations, and is now an umbrella term for the science of logical decision making in humans and computers. Modern game theory began with the idea regarding the existence of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann's original proof used the Brouwer fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by the 1944 book Theory of Games and Economic Behavior, co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty.

Game theory was developed extensively in the 1950s by many scholars. It was explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been recognized as an important tool in many fields. As of 2014, with the Nobel Memorial Prize in Economic Sciences going to game theorist Jean Tirole, eleven game theorists had won the economics Nobel Prize. John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology. Early discussions of examples of two-person games occurred long before the rise of modern, mathematical game theory. The first known discussion of game theory occurred in a letter written by Charles Waldegrave, an active Jacobite and uncle to James Waldegrave, a British diplomat, in 1713. In this letter, Waldegrave provides a minimax mixed-strategy solution to a two-person version of the card game le Her; the problem is now known as the Waldegrave problem. In his 1838 Recherches sur les principes mathématiques de la théorie des richesses, Antoine Augustin Cournot considered a duopoly and presented a solution that is a restricted version of the Nash equilibrium.

In 1913, Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels. It proved that the optimal chess strategy is strictly determined; this paved the way for more general theorems. In 1938, the Danish mathematical economist Frederik Zeuthen proved that the mathematical model had a winning strategy by using Brouwer's fixed point theorem. In his 1938 book Applications aux Jeux de Hasard and earlier notes, Émile Borel proved a minimax theorem for two-person zero-sum matrix games only when the pay-off matrix was symmetric. Borel conjectured the non-existence of mixed-strategy equilibria in two-person zero-sum games, a conjecture that was later proved false. Game theory did not exist as a unique field until John von Neumann published the paper On the Theory of Games of Strategy in 1928. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by his 1944 book Theory of Games and Economic Behavior, co-authored with Oskar Morgenstern.
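The pure-strategy maximin/minimax gap that the minimax theorem resolves can be sketched on a made-up payoff matrix: when the two values differ, as below, no pure-strategy saddle point exists, and only mixed strategies equalise them (which is the content of von Neumann's theorem):

```python
# Entry payoff[i][j] is what the column player pays the row player when
# row plays strategy i and column plays strategy j. The matrix is a toy
# example chosen for illustration.

def maximin(payoff):
    # The row player can guarantee at least this with a pure strategy.
    return max(min(row) for row in payoff)

def minimax(payoff):
    # The column player concedes at most this with a pure strategy.
    return min(max(col) for col in zip(*payoff))

payoff = [[3, 1, 4],
          [1, 2, 2],
          [0, 1, 1]]
assert maximin(payoff) == 1
assert minimax(payoff) == 2   # maximin < minimax: no pure saddle point here
```

Mixing the pure strategies closes the gap between the two values at the game's unique value, per the minimax theorem.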

The second edition of this book provided an axiomatic theory of utility, which reincarnated Daniel Bernoulli's old theory of utility as an independent discipline. Von Neumann's work in game theory culminated in this 1944 book; this foundational work contains the method for finding mutually consistent solutions for two-person zero-sum games. During the following time period, work on game theory was focused primarily on cooperative game theory, which analyzes optimal strategies for groups of individuals, presuming that they can enforce agreements between them about proper strategies. In 1950, the first mathematical discussion of the prisoner's dilemma appeared, and an experiment was undertaken by the notable mathematicians Merrill M. Flood and Melvin Dresher as part of the RAND Corporation's investigations into game theory. RAND pursued the studies because of possible applications to global nuclear strategy. Around this same time, John Nash developed a criterion for mutual consistency of players' strategies, known as the Nash equilibrium, applicable to a wider variety of games than the criterion proposed by von Neumann and Morgenstern.

Nash proved that every n-player, non-zero-sum, non-cooperative game has what is now known as a Nash equilibrium. Game theory experienced a flurry of activity in the 1950s, during which time the concepts of the core, the extensive form game, fictitious play, repeated games, and the Shapley value were developed. In addition, the first applications of game theory to philosophy and political science occurred during this time. In 1979, Robert Axelrod tried setting up computer programs as players and found that in tournaments between them the winner was often a simple "tit-for-tat" program that cooperates on the first step, then on subsequent steps just does whatever its opponent did on the previous step. The same winner was often obtained by natural selection. In 1965, Reinhard Selten introduced his solution concept of subgame perfect equilibria, which further refined the Nash equilibrium. In 1994, Nash, Selten and Harsanyi became Economics Nobel Laureates for their contributions to economic game theory.
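The Axelrod-style tournament setting can be sketched in a few lines. The payoff numbers are the conventional prisoner's dilemma values (T=5, R=3, P=1, S=0), and the strategy functions and round count are illustrative choices:

```python
# Payoffs for one round: 'C' = cooperate, 'D' = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)   # each sees the other's past
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Tit-for-tat is exploited only on the first round, then matches defection.
assert play(tit_for_tat, always_defect) == (9, 14)
# Two tit-for-tat players cooperate throughout and both do well.
assert play(tit_for_tat, tit_for_tat) == (30, 30)
```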

Evolution and the Theory of Games

Evolution and the Theory of Games is a book by the British evolutionary biologist John Maynard Smith on evolutionary game theory. The book was published in December 1982 by Cambridge University Press. In the book, John Maynard Smith summarises work on evolutionary game theory that had developed in the 1970s, to which he made several important contributions; the book is noted for being well written and not overly mathematically challenging. The main contribution of this book is the introduction of the evolutionarily stable strategy, or ESS, which states that for a set of behaviours to be conserved over evolutionary time, they must be the most profitable avenue of action when common, so that no alternative behaviour can invade. So, for instance, suppose that in a population of frogs, males fight to the death over breeding ponds; this would be an ESS. A more likely scenario is one where fighting to the death is not an ESS, because a frog might arise that will stop fighting if it realises that it is going to lose.

This frog would reap the benefits of fighting, but not the ultimate cost. Hence, fighting to the death would be invaded by a mutation that causes this sort of "informed fighting", much complexity can be built from which Maynard Smith is outstanding at explaining in clear prose and with simple math.

Evolutionary game theory

Evolutionary game theory is the application of game theory to evolving populations in biology. It defines a framework of contests, strategies, and analytics into which Darwinian competition can be modelled. It originated in 1973 with John Maynard Smith and George R. Price's formalisation of contests, analysed as strategies, and the mathematical criteria that can be used to predict the results of competing strategies. Evolutionary game theory differs from classical game theory in focusing more on the dynamics of strategy change; this is influenced by the frequency of the competing strategies in the population. Evolutionary game theory has helped to explain the basis of altruistic behaviours in Darwinian evolution, and it has in turn become of interest to economists, sociologists and philosophers. Classical non-cooperative game theory was conceived by John von Neumann to determine optimal strategies in competitions between adversaries. A contest involves players. Games can be a single round or repetitive; the approach a player takes in making his moves constitutes his strategy.

Rules govern the outcome for the moves taken by the players, and outcomes produce payoffs for the players. Classical theory requires the players to make rational choices; each player must consider the strategic analysis that his opponents are making in order to make his own choice of moves. Evolutionary game theory started with the problem of how to explain ritualized animal behaviour in a conflict situation; the leading ethologists Niko Tinbergen and Konrad Lorenz had proposed that such behaviour exists for the benefit of the species. John Maynard Smith considered that incompatible with Darwinian thought, where selection occurs at an individual level, so self-interest is rewarded while seeking the common good is not. Maynard Smith, a mathematical biologist, turned to game theory as suggested by George Price, though Richard Lewontin's attempts to use the theory had failed. Maynard Smith realised that an evolutionary version of game theory does not require players to act rationally, only that they have a strategy.

The results of a game show how good that strategy was, just as evolution tests alternative strategies for the ability to survive and reproduce. In biology, strategies are genetically inherited traits that control an individual's action, analogous with computer programs. The success of a strategy is determined by how good the strategy is in the presence of competing strategies, and by the frequency with which those strategies are used. Maynard Smith described his work in his book Evolution and the Theory of Games. Participants aim to produce as many replicas of themselves as they can, and the payoff is in units of fitness; it is always a multi-player game with many competitors. Rules include replicator dynamics, in other words how the fitter players will spawn more replicas of themselves into the population and how the less fit will be culled, expressed in a replicator equation. The replicator dynamics models heredity but not mutation, and assumes asexual reproduction for the sake of simplicity. Games are run repetitively with no terminating conditions.

Results include the dynamics of changes in the population, the success of strategies, and any equilibrium states reached. Unlike in classical game theory, players do not choose their strategy and cannot change it: they are born with a strategy and their offspring inherit that same strategy. Evolutionary game theory encompasses Darwinian evolution, including competition, natural selection and heredity. Evolutionary game theory has contributed to the understanding of group selection, sexual selection, parental care, co-evolution and ecological dynamics. Many counter-intuitive situations in these areas have been put on a firm mathematical footing by the use of these models. The common way to study the evolutionary dynamics in games is through replicator equations. These show the growth rate of the proportion of organisms using a certain strategy; that rate is equal to the difference between the average payoff of that strategy and the average payoff of the population as a whole. Continuous replicator equations assume infinite populations, continuous time, complete mixing and that strategies breed true.
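A discrete-time sketch of the replicator equation for two strategies, with a made-up payoff matrix in which each strategy does better when rare: as described above, a strategy's share grows in proportion to its payoff advantage over the population average:

```python
def step(x, payoff, dt=0.01):
    """One Euler step of the two-strategy replicator dynamic.

    x is the share of strategy A; payoff[i][j] is the payoff to
    strategy i when playing against strategy j."""
    (a, b), (c, d) = payoff
    f_a = a * x + b * (1 - x)           # average payoff to strategy A
    f_b = c * x + d * (1 - x)           # average payoff to strategy B
    f_bar = x * f_a + (1 - x) * f_b     # population-average payoff
    return x + dt * x * (f_a - f_bar)   # replicator equation, dx/dt = x(f_a - f_bar)

# Hypothetical payoffs where each strategy does better when rare, so the
# dynamic converges to an interior mixed equilibrium.
payoff = [[0, 3], [1, 2]]
x = 0.1
for _ in range(10_000):
    x = step(x, payoff)
assert abs(x - 0.5) < 1e-3              # equilibrium where f_a == f_b
```

At x = 0.5 both strategies earn the same payoff (3 − 3x = 2 − x), so the share stops changing, which is the fixed point of the dynamic.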

The attractors of the equations are equivalent to evolutionarily stable states. A strategy which can survive all "mutant" strategies is considered evolutionarily stable. In the context of animal behavior, this means such strategies are programmed and influenced by genetics, thus making any player or organism's strategy determined by these biological factors. Evolutionary games are mathematical objects with different rules and mathematical behaviours. Each "game" represents different problems that organisms have to deal with, and the strategies they might adopt to survive and reproduce. Evolutionary games are often given colourful names and cover stories which describe the general situation of a particular game. Representative games include hawk-dove, war of attrition, stag hunt, producer-scrounger, tragedy of the commons and prisoner's dilemma. Strategies for these games include Hawk, Dove, Bourgeois, Defector and Retaliator. The various strategies compete under the particular game's rules, and the mathematics are used to determine the results and behaviours.
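Maynard Smith's ESS conditions can be checked mechanically for a small payoff matrix. The sketch below uses hawk-dove payoffs with illustrative values V = 2 (resource value) and C = 4 (fight cost); since C > V, neither pure strategy is an ESS:

```python
# payoff[i][j] is the payoff to strategy i against strategy j,
# with 0 = Hawk and 1 = Dove. V and C are hypothetical example values.
V, C = 2.0, 4.0
payoff = [[(V - C) / 2, V],      # Hawk vs Hawk, Hawk vs Dove
          [0.0,         V / 2]]  # Dove vs Hawk, Dove vs Dove

def is_ess(i, payoff):
    """Maynard Smith's conditions: for every mutant j != i, either
    E(i,i) > E(j,i), or E(i,i) == E(j,i) and E(i,j) > E(j,j)."""
    for j in range(len(payoff)):
        if j == i:
            continue
        if payoff[i][i] < payoff[j][i]:
            return False
        if payoff[i][i] == payoff[j][i] and payoff[i][j] <= payoff[j][j]:
            return False
    return True

# With C > V, a population of Hawks is invaded by Doves and vice versa;
# the stable state is instead a mixed population with a V/C share of Hawks.
assert not is_ess(0, payoff)
assert not is_ess(1, payoff)
```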

The first game that Maynard Smith analysed is the classic Hawk Dove game. It was conceived to analyse a contest over a shareable resource; the contestants can be either Hawk or Dove. These are two subtypes or morphs of one species.