1.
Calipers
–
A caliper is a device used to measure the distance between two opposite sides of an object. A caliper can be as simple as a compass with inward- or outward-facing points. The tips of the caliper are adjusted to fit across the points to be measured, and the caliper is then removed and the distance read against a scale. Calipers are used in many fields such as mechanical engineering, metalworking, forestry, woodworking, science and medicine. A plurale tantum sense of the word "calipers" coexists in natural usage with the regular noun sense of "caliper". Also existing colloquially, but not in formal usage, is referring to a vernier caliper as a "vernier" or a "pair of verniers"; in imprecise colloquial usage, some extend this even to dial calipers. In machine-shop usage, "caliper" is often used in contradistinction to "micrometer"; in this usage, "caliper" implies only the vernier or dial caliper. The earliest known caliper was found in the Greek Giglio wreck near the Italian coast; the ship find dates to the 6th century BC. The wooden piece already featured a fixed and a movable jaw. Although rare finds, calipers remained in use by the Greeks and Romans. A bronze caliper, dating from 9 AD, was used for minute measurements during the Chinese Xin dynasty; it carries an inscription stating that it was made on a gui-you day at new moon of the first month of the first year of the Shijian guo period. That caliper included a slot and pin and was graduated in inches. The modern vernier caliper, reading to thousandths of an inch, was invented by the American Joseph R. Brown in 1851. It was the first practical tool for exact measurements that could be sold at a price within the reach of ordinary machinists. Inside calipers are used to measure the internal size of an object. The upper caliper in the image requires manual adjustment prior to fitting; fine setting of this caliper type is performed by tapping the caliper legs lightly on a handy surface until they will almost pass over the object.
A light push against the resistance of the pivot screw then spreads the legs to the correct dimension and provides the required measuring tension. The lower caliper in the image has an adjusting screw that permits it to be carefully set without removing the tool from the workpiece. Outside calipers are used to measure the external size of an object. The same observations and technique apply to this type of caliper. With some understanding of their limitations and usage, these instruments can provide a high degree of accuracy and repeatability. They are especially useful when measuring over very large distances: if the calipers are used to measure a large-diameter pipe, for example, a vernier caliper does not have the depth capacity to straddle this large diameter while at the same time reaching the outermost points of the pipe's diameter.
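The arithmetic of reading a vernier caliper is simple enough to sketch: the final value is the main-scale reading plus the index of the aligned vernier line times the instrument's least count. A minimal illustration, assuming a metric caliper with a 0.02 mm least count; the function name and the sample values are illustrative, not taken from the text (which describes an inch-reading instrument).

```python
def vernier_reading(main_scale_mm, vernier_division, least_count_mm=0.02):
    # Reading = main-scale value just left of the vernier's zero, plus the
    # index of the aligned vernier line times the least count (resolution).
    return main_scale_mm + vernier_division * least_count_mm

# Main scale shows 24 mm and the 7th vernier line is aligned:
print(round(vernier_reading(24, 7), 2))  # 24.14
```

The same formula applies to an inch-reading vernier; only the least count changes (e.g. 0.001 in for the thousandths instruments described above).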
Caliper with graduated bow 0–10 mm
Two inside calipers
Three outside calipers
A pair of dividers
2.
Raphael
–
Raffaello Sanzio da Urbino, known as Raphael, was an Italian painter and architect of the High Renaissance. His work is admired for its clarity of form and ease of composition; together with Michelangelo and Leonardo da Vinci, he forms the traditional trinity of great masters of that period. Raphael was enormously productive, running an unusually large workshop, and despite his death at 37 he left a large body of work. Many of his works are found in the Vatican Palace, where the frescoed Raphael Rooms were the central, and the largest, work of his career; the best known work is The School of Athens in the Vatican Stanza della Segnatura. After his early years in Rome, much of his work was executed by his workshop from his drawings. He was extremely influential in his lifetime, though outside Rome his work was mostly known from his collaborative printmaking. Raphael was born in the small but artistically significant central Italian city of Urbino in the Marche region, where his father Giovanni Santi was court painter; Santi's poem to Federico shows him as keen to show awareness of the most advanced North Italian painters and Early Netherlandish artists as well. In the small court of Urbino he was probably more integrated into the central circle of the ruling family than most court painters. Under them the court continued as a centre for literary culture, and growing up in the circle of this small court gave Raphael the excellent manners and social skills stressed by Vasari. Castiglione moved to Urbino in 1504, when Raphael was no longer based there but frequently visited. Raphael mixed easily in the highest circles throughout his life, one of the factors that tended to give a misleading impression of effortlessness to his career. He did not receive a full humanistic education, however; it is unclear how easily he read Latin. His mother Màgia died in 1491 when Raphael was eight, followed on August 1, 1494 by his father. Raphael was thus orphaned at eleven; his formal guardian became his only paternal uncle Bartolomeo, a priest, who subsequently engaged in litigation with his stepmother.
He probably continued to live with his stepmother when not staying as an apprentice with a master. He had already shown talent, according to Vasari, who says that Raphael had been "a great help" to his father. A self-portrait drawing from his teenage years shows his precocity. His father's workshop continued and, probably together with his stepmother, Raphael evidently played a part in managing it from a very early age. In Urbino he came into contact with the works of Paolo Uccello, previously the court painter, and Luca Signorelli. According to Vasari, his father placed him in the workshop of the Umbrian master Pietro Perugino as an apprentice "despite the tears of his mother". The evidence of an apprenticeship comes only from Vasari and another source; an alternative theory is that he received at least some training from Timoteo Viti, who acted as court painter in Urbino from 1495. An excess of resin in the varnish often causes cracking of areas of paint in the works of both masters. The Perugino workshop was active in both Perugia and Florence, perhaps maintaining two permanent branches. In the commission documents Raphael is described as a "master", that is to say fully trained. His first documented work was the Baronci altarpiece for the church of Saint Nicholas of Tolentino in Città di Castello, a town halfway between Perugia and Urbino. Evangelista da Pian di Meleto, who had worked for his father, was also named in the commission.
Presumed Portrait of Raphael
Self-portrait with a friend (c. 1518), Louvre, Paris
Giovanni Santi, Raphael's father; Christ supported by two angels, c. 1490
The Mond Crucifixion, 1502–3, very much in the style of Perugino
3.
The School of Athens
–
The School of Athens is one of the most famous frescoes by the Italian Renaissance artist Raphael. It was painted between 1509 and 1511 as part of Raphael's commission to decorate the rooms now known as the Stanze di Raffaello in the Vatican. The picture has long been seen as Raphael's masterpiece and the perfect embodiment of the classical spirit of the Renaissance. The School of Athens is one of a group of four main frescoes on the walls of the Stanza that depict distinct branches of knowledge; accordingly, the figures on the walls below exemplify Philosophy, Poetry, Theology, and Law. The traditional title is not Raphael's. Plato and Aristotle appear to be the central figures in the scene; however, all the philosophers depicted sought knowledge of first causes, many lived before Plato and Aristotle, and hardly a third were Athenians. The architecture contains Roman elements, but the general semi-circular setting has Plato and Aristotle at its centre. Compounding the problem of identification, Raphael had to invent a system of iconography to allude to various figures for whom there were no traditional visual types. For example, while the Socrates figure is immediately recognizable from Classical busts, many others are not. Aside from the identities of the figures depicted, many aspects of the fresco have been variously interpreted, but few such interpretations are unanimously accepted among scholars. The popular idea that the gestures of Plato and Aristotle are kinds of pointing is very likely correct. Aristotle, with his four-elements theory, held that all change on Earth was owing to motions of the heavens; in the painting Aristotle carries his Ethics, which he denied could be reduced to a mathematical science. Finally, according to Vasari, the scene includes Raphael himself. However, as Heinrich Wölfflin observed, it is quite wrong to attempt interpretations of the School of Athens as an esoteric treatise; the all-important thing was the motive which expressed a physical or spiritual state.
One interpretation relates the fresco to hidden symmetries of the figures. The identities of some of the philosophers in the picture, such as Plato and Aristotle, are certain. Beyond that, identifications of Raphael's figures have always been hypothetical; to complicate matters, beginning with Vasari's efforts, some have received multiple identifications, not only as ancients but also as figures contemporary with Raphael. Luitpold Dussler counts among those who can be identified with certainty Plato, Aristotle, Socrates, Pythagoras, Euclid, Ptolemy, Zoroaster, Raphael, and Sodoma; other identifications he holds to be more or less speculative. Both central figures hold modern, bound copies of their books in their left hands while gesturing with their right: Plato holds the Timaeus, Aristotle his Nicomachean Ethics. Plato is depicted as old, grey, wise-looking, and bare-foot; by contrast Aristotle, slightly ahead of him, is in the prime of manhood, handsome, well-shod and dressed with gold. Many interpret the painting to show a divergence of the two philosophical schools: Plato argues for a sense of timelessness, whilst Aristotle looks into the physicality of life and the present realm.
The School of Athens
An elder Plato walks alongside Aristotle.
Architecture
Epicurus
4.
Number
–
A number is a mathematical object used to count, measure, and label. The original examples are the natural numbers 1, 2, 3, and so forth. A notational symbol that represents a number is called a numeral. In addition to their use in counting and measuring, numerals are used for labels and for ordering. In common usage, "number" may refer to a symbol, a word, or a mathematical abstraction. Calculations with numbers are done with arithmetical operations, the most familiar being addition, subtraction, multiplication, division, and exponentiation. Their study or usage is called arithmetic; the same term may also refer to number theory, the study of the properties of numbers. Besides their practical uses, numbers have cultural significance throughout the world: for example, in Western society the number 13 is regarded as unlucky, and "a million" may signify "a lot". Though it is now regarded as pseudoscience, numerology, the belief in a mystical significance of numbers, permeated ancient and medieval thought. Numerology heavily influenced the development of Greek mathematics, stimulating the investigation of problems in number theory which are still of interest today. During the 19th century, mathematicians began to develop many different abstractions which share certain properties of numbers; among the first were the hypercomplex numbers, which consist of various extensions or modifications of the complex number system. Numbers should be distinguished from numerals, the symbols used to represent numbers. Boyer showed that Egyptians created the first ciphered numeral system; the Greeks followed by mapping their counting numbers onto the Ionian and Doric alphabets. The number five can be represented by the digit 5 or by the Roman numeral Ⅴ; notations used to represent numbers are discussed in the article on numeral systems. The Roman numerals require extra symbols for larger numbers. Different types of numbers have many different uses. Numbers can be classified into sets, called number systems, such as the natural numbers and the real numbers. The same number can be written in many different ways.
For different methods of expressing numbers with symbols, such as the Roman numerals, see numeral systems. Each of these number systems is a proper subset of the next one. This is expressed symbolically by writing N ⊂ Z ⊂ Q ⊂ R ⊂ C. The most familiar numbers are the natural numbers: 1, 2, 3, and so on. Traditionally, the sequence of natural numbers started with 1. However, in the 19th century, set theorists and other mathematicians started including 0. Today, different mathematicians use the term for both sets, either including 0 or not.
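The chain of containments N ⊂ Z ⊂ Q ⊂ R ⊂ C can be illustrated with Python's numeric types; a minimal sketch, where the variable names are illustrative and a float only approximates a real number (here -5.0 happens to be exactly representable).

```python
from fractions import Fraction

n = 5                # a natural number (a non-negative int)
z = -5               # an integer
q = Fraction(-5, 1)  # a rational: every integer is also a rational
r = -5.0             # a real, approximated by a float
c = complex(-5, 0)   # a complex number with zero imaginary part

# The same value survives each widening step, mirroring Z ⊂ Q ⊂ R ⊂ C:
print(z == q == r == c)  # True
```

Python's comparison operators already treat these types as one numeric tower, which is why the chained equality holds across all four representations.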
The number 605 in Khmer numerals, from an inscription from 683 AD. An early use of zero as a decimal figure.
Subsets of the complex numbers.
5.
Conjecture
–
In mathematics, a conjecture is a conclusion or proposition based on incomplete information, for which no proof has yet been found. Conjectures such as the Riemann hypothesis or Fermat's Last Theorem have shaped much of mathematical history, as new areas of mathematics are developed in order to prove them. In number theory, Fermat's Last Theorem states that no three positive integers a, b, and c can satisfy the equation a^n + b^n = c^n for any integer value of n greater than two. This theorem was first conjectured by Pierre de Fermat in 1637 in the margin of a copy of Arithmetica, where he claimed he had a proof that was too large to fit in the margin. The first successful proof was released in 1994 by Andrew Wiles. The unsolved problem stimulated the development of algebraic number theory in the 19th century and the proof of the modularity theorem in the 20th century, and it is among the most notable theorems in the history of mathematics. The four color theorem concerns coloring the regions of a map. Two regions are called adjacent if they share a common boundary that is not a corner, where corners are the points shared by three or more regions. For example, in the map of the United States of America, Utah and Arizona are adjacent, but Utah and New Mexico are not. Möbius mentioned the problem in his lectures as early as 1840, and the conjecture was first proposed on October 23, 1852, when Francis Guthrie noticed it while trying to color the map of counties of England. A number of false proofs and false counterexamples have appeared since the first statement of the four color theorem in 1852. The four color theorem was proven in 1976 by Kenneth Appel and Wolfgang Haken; it was the first major theorem to be proved using a computer. Appel and Haken's approach started by showing that there is a particular set of 1,936 maps, each of which cannot be part of a smallest-sized counterexample. Appel and Haken used a computer program to confirm that each of these maps had this property. Additionally, any map that could potentially be a counterexample must have a portion that looks like one of these 1,936 maps; showing this required hundreds of pages of hand analysis.
Appel and Haken concluded that no smallest counterexample exists, because any such counterexample must contain, yet cannot contain, one of these 1,936 maps. This contradiction means there are no counterexamples at all and that the theorem is therefore true. Initially, their proof was not accepted by all mathematicians, because it was infeasible for a human to check by hand. Since then the proof has gained wider acceptance, although doubts remain. The Hauptvermutung of geometric topology is the conjecture that any two triangulations of a triangulable space have a common refinement, a single triangulation that is a subdivision of both of them. It was originally formulated in 1908 by Steinitz and Tietze. This conjecture is now known to be false: the non-manifold version was disproved by John Milnor in 1961 using Reidemeister torsion, while the manifold version is true in dimensions m ≤ 3.
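Fermat's equation a^n + b^n = c^n can be probed numerically. The sketch below brute-forces a small search range and, as the theorem guarantees, finds nothing; the function name and bounds are illustrative, and such a search of course proves nothing beyond the range it covers.

```python
def counterexamples(max_base=50, max_n=5):
    # Search 1 <= a <= b <= max_base and 3 <= n <= max_n for integer
    # solutions of a**n + b**n == c**n. Fermat's Last Theorem says
    # that none exist for any n > 2.
    found = []
    for n in range(3, max_n + 1):
        # Precompute nth powers so membership testing is a dict lookup;
        # 2 * max_base safely bounds any candidate c in this range.
        powers = {c ** n: c for c in range(1, 2 * max_base)}
        for a in range(1, max_base + 1):
            for b in range(a, max_base + 1):
                s = a ** n + b ** n
                if s in powers:
                    found.append((a, b, powers[s], n))
    return found

print(counterexamples())  # []
```

For n = 2 the same search would instead turn up the familiar Pythagorean triples, which is exactly the contrast the theorem draws.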
The real part (red) and imaginary part (blue) of the Riemann zeta function along the critical line Re(s) = 1/2. The first non-trivial zeros can be seen at Im(s) = ±14.135, ±21.022 and ±25.011. The Riemann hypothesis, a famous conjecture, says that all non-trivial zeros of the zeta function lie along the critical line.
6.
Logic
–
Logic, from a Greek word originally meaning "the word" or "what is spoken", is generally held to consist of the systematic study of the form of arguments. A valid argument is one where there is a specific relation of logical support between the assumptions of the argument and its conclusion. Historically, logic has been studied in philosophy and mathematics, and more recently logic has been studied in computer science, linguistics, and psychology. The concept of logical form is central to logic: the validity of an argument is determined by its logical form, not by its content. Traditional Aristotelian syllogistic logic and modern symbolic logic are examples of formal logic. Informal logic is the study of natural language arguments; the study of fallacies is an important branch of informal logic. Since much informal argument is not strictly speaking deductive, on some conceptions of logic, informal logic is not logic at all. Formal logic, on those conceptions, is the study of inference with purely formal content. An inference possesses a purely formal content if it can be expressed as an application of a wholly abstract rule, that is, a rule that is not about any particular thing or property. The works of Aristotle contain the earliest known formal study of logic, and modern formal logic follows and expands on Aristotle. In many definitions of logic, logical inference and inference with purely formal content are the same. This does not render the notion of informal logic vacuous, because no formal logic captures all of the nuances of natural language. Symbolic logic is the study of symbolic abstractions that capture the formal features of logical inference; it is divided into two main branches, propositional logic and predicate logic. Mathematical logic is an extension of symbolic logic into other areas, in particular to the study of model theory, proof theory, and set theory. Logic is generally considered formal when it analyzes and represents the form of any valid argument type. The form of an argument is displayed by representing its sentences in the formal grammar and symbolism of a logical language to make its content usable in formal inference.
Simply put, to formalise means to translate English sentences into the language of logic; this is called showing the logical form of the argument. It is necessary because indicative sentences of ordinary language show a considerable variety of form. Certain parts of each sentence must be replaced with schematic letters. Thus, for example, the expression "all Ps are Qs" shows the logical form common to the sentences "all men are mortals", "all cats are carnivores", "all Greeks are philosophers", and so on. The schema can further be condensed into the formula A, where the letter A indicates the judgement "all - are -". The importance of form was recognised from ancient times.
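The schema "all Ps are Qs" can be evaluated mechanically over a finite domain: the judgement holds when every individual satisfying P also satisfies Q. A minimal sketch, with illustrative predicate names and a toy domain of my own choosing.

```python
def all_ps_are_qs(domain, P, Q):
    # The schema "all Ps are Qs": every individual in the domain
    # that satisfies predicate P must also satisfy predicate Q.
    return all(Q(x) for x in domain if P(x))

animals = ["cat", "tiger", "cow"]
is_cat = lambda x: x in {"cat", "tiger"}        # P: "- is a cat"
is_carnivore = lambda x: x in {"cat", "tiger"}  # Q: "- is a carnivore"

print(all_ps_are_qs(animals, is_cat, is_carnivore))  # True
```

Swapping in different predicates for P and Q leaves the evaluation code untouched, which is precisely the point of schematic letters: the form is fixed while the content varies.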
Aristotle, 384–322 BCE.
7.
Measurement
–
Measurement is the assignment of a number to a characteristic of an object or event, which can be compared with other objects or events. The scope and application of a measurement is dependent on the context and discipline. In the natural sciences and engineering, measurements do not apply to nominal properties of objects or events; however, in other fields such as statistics as well as the social and behavioral sciences, measurements can have multiple levels, which would include nominal, ordinal, interval, and ratio scales. Measurement is a cornerstone of trade, science, technology, and quantitative research in many disciplines. Historically, many measurement systems existed for the varied fields of human existence to facilitate comparisons in these fields. Often these were achieved by local agreements between trading partners or collaborators. Since the 18th century, developments have progressed towards unifying, widely accepted standards that resulted in the modern International System of Units (SI). This system reduces all physical measurements to a mathematical combination of seven base units. The science of measurement is pursued in the field of metrology. The measurement of a property may be categorized by the following criteria: type, magnitude, unit, and uncertainty. These enable unambiguous comparisons between measurements. The type or level of measurement is a taxonomy for the methodological character of a comparison. For example, two states of a property may be compared by ratio, difference, or ordinal preference; the type is commonly not explicitly expressed, but implicit in the definition of a measurement procedure. The magnitude is the numerical value of the characterization, usually obtained with a suitably chosen measuring instrument. A unit assigns a mathematical weighting factor to the magnitude that is derived as a ratio to the property of an artefact used as a standard or a natural physical quantity. An uncertainty represents the random and systemic errors of the measurement procedure; errors are evaluated by methodically repeating measurements and considering the accuracy and precision of the measuring instrument.
Measurements most commonly use the International System of Units (SI) as a comparison framework. The system defines seven fundamental units: kilogram, metre, candela, second, ampere, kelvin, and mole. When a unit is defined in terms of a natural constant rather than an artefact, the measurement unit can only ever change through increased accuracy in determining the value of the constant it is tied to. Early work along these lines, tying a unit of length to a wavelength of light, directly influenced the Michelson–Morley experiment: Michelson and Morley cite Peirce, and improve on his method. With the exception of a few fundamental quantum constants, units of measurement are derived from historical agreements. Nothing inherent in nature dictates that an inch has to be a certain length, nor that a mile is a better measure of distance than a kilometre. Over the course of history, however, first for convenience and then for necessity, standards of measurement evolved so that communities would have certain common benchmarks. Laws regulating measurement were originally developed to prevent fraud in commerce. Today, units are generally defined on a scientific basis and established by international agreement; the yard, for example, is defined as exactly 0.9144 metres. In the United States, the National Institute of Standards and Technology, a division of the United States Department of Commerce, regulates commercial measurements. Before SI units were widely adopted around the world, the British systems of English units and later imperial units were used in Britain and the Commonwealth. The system came to be known as U.S. customary units in the United States and is still in use there and in a few Caribbean countries.
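Because the yard is defined as exactly 0.9144 metres, converting between the two systems is a single multiplication. A minimal sketch; the function names are illustrative.

```python
YARD_IN_METRES = 0.9144  # exact, by international agreement

def yards_to_metres(yards):
    return yards * YARD_IN_METRES

def metres_to_yards(metres):
    return metres / YARD_IN_METRES

print(round(yards_to_metres(100), 4))       # 91.44
print(round(metres_to_yards(1609.344), 4))  # 1760.0  (a statute mile, in yards)
```

The rounding in the prints only papers over binary floating-point representation; the underlying conversion factor itself is exact by definition, not a measured quantity.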
A typical tape measure with both metric and US units and two US pennies for comparison
A baby bottle that measures in three measurement systems: Imperial (U.K.), U.S. customary, and metric.
Four measuring devices having metric calibrations
8.
Shape
–
A shape is the form of an object or its external boundary, outline, or external surface, as opposed to other properties such as color, texture, or material composition. Psychologists have theorized that humans mentally break down images into simple geometric shapes called geons; examples of geons include cones and spheres. Some simple shapes can be put into broad categories. For instance, polygons are classified according to their number of edges as triangles, quadrilaterals, pentagons, etc. Each of these is divided into smaller categories: triangles can be equilateral, isosceles, obtuse, acute, scalene, etc., while quadrilaterals can be rectangles, rhombi, trapezoids, squares, etc. Other common shapes are points, lines, planes, and conic sections such as ellipses and circles. Among the most common 3-dimensional shapes are polyhedra, which are shapes with flat faces; ellipsoids, which are egg-shaped or sphere-shaped objects; cylinders; and cones. If an object falls into one of these categories exactly or even approximately, we can use the category to describe the object's shape. Thus, we say that the shape of a manhole cover is a disk, because it is approximately the same geometric object as an actual geometric disk. Similarity: two objects are similar if one can be transformed into the other by a scaling, together with a sequence of rotations, translations, and/or reflections. Isotopy: two objects are isotopic if one can be transformed into the other by a sequence of deformations that do not tear the object or put holes in it. Sometimes, two similar or congruent objects may be regarded as having a different shape if a reflection is required to transform one into the other. For instance, the letters "b" and "d" are a reflection of each other, and hence they are congruent and similar, but in some contexts they are not regarded as having the same shape. Sometimes, only the outline or external boundary of the object is considered to determine its shape; for instance, a hollow sphere may be considered to have the same shape as a solid sphere.
Procrustes analysis is used in many sciences to determine whether or not two objects have the same shape, or to measure the difference between two shapes. In advanced mathematics, quasi-isometry can be used as a criterion to state that two shapes are approximately the same. Simple shapes can often be classified into basic geometric objects such as a point, a line, a curve, or a plane. However, most shapes occurring in the physical world are complex. Some, such as plant structures and coastlines, may be so complicated as to defy traditional mathematical description, in which case they may be analyzed by differential geometry, or as fractals. In geometry, two subsets of a Euclidean space have the same shape if one can be transformed to the other by a combination of translations, rotations, and uniform scalings. In other words, the shape of a set of points is all the geometrical information that is invariant to translations, rotations, and changes of size.
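The invariance just described suggests a simple computational test for labelled point sets: two configurations of corresponding points have the same shape exactly when their pairwise distances agree up to one common scale factor, since pairwise distances are unchanged by translation and rotation and are multiplied uniformly by scaling. A sketch under that assumption; the function names and tolerance are illustrative, and this is far simpler than a full Procrustes analysis.

```python
import math
from itertools import combinations

def pairwise_distances(points):
    return [math.dist(p, q) for p, q in combinations(points, 2)]

def same_shape(a, b, tol=1e-9):
    # Labelled point sets a and b have the same shape when every pairwise
    # distance in b equals the corresponding distance in a times one
    # common scale factor.
    da, db = pairwise_distances(a), pairwise_distances(b)
    if len(da) != len(db) or not da or da[0] == 0:
        return False
    scale = db[0] / da[0]
    return all(abs(y - scale * x) <= tol for x, y in zip(da, db))

triangle = [(0, 0), (1, 0), (0, 1)]
bigger = [(5, 5), (7, 5), (5, 7)]  # the same triangle, doubled and translated
print(same_shape(triangle, bigger))  # True
```

Note that this test also accepts mirror images, since reflections preserve distances; as the text observes, whether a reflection counts as "the same shape" depends on context.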
A variety of polygonal shapes.
9.
History of Mathematics
–
Before the modern age and the worldwide spread of knowledge, written examples of new mathematical developments have come to light only in a few locales. The most ancient mathematical texts available are Plimpton 322, the Rhind Mathematical Papyrus, and the Moscow Mathematical Papyrus. All of these texts concern the so-called Pythagorean theorem, which seems to be the most ancient and widespread mathematical development after basic arithmetic and geometry. Greek mathematics greatly refined the methods (especially through the introduction of deductive reasoning and mathematical rigor in proofs) and expanded the subject matter of mathematics. Chinese mathematics made early contributions, including a place value system. Islamic mathematics, in turn, developed and expanded the mathematics known to these civilizations. Many Greek and Arabic texts on mathematics were then translated into Latin, which led to further development of mathematics in medieval Europe. From ancient times through the Middle Ages, periods of mathematical discovery were often followed by centuries of stagnation. Beginning in Renaissance Italy in the 16th century, new mathematical developments, interacting with new scientific discoveries, were made at an increasing pace that continues to the present day. The origins of mathematical thought lie in the concepts of number, magnitude, and form. Modern studies of animal cognition have shown that these concepts are not unique to humans. Such concepts would have been part of everyday life in hunter-gatherer societies. The idea of the "number" concept evolving gradually over time is supported by the existence of languages which preserve the distinction between "one", "two", and "many", but not of numbers larger than two. Prehistoric artifacts discovered in Africa, dated to 20,000 years old or more, suggest early attempts to quantify time. The Ishango bone, found near the headwaters of the Nile river, may be more than 20,000 years old; common interpretations are that the Ishango bone shows either the earliest known demonstration of sequences of prime numbers or a six-month lunar calendar.
One scholar cautions, however, that no attempt has been made to explain why a tally of something should exhibit multiples of two, prime numbers between 10 and 20, and some numbers that are almost multiples of 10. Predynastic Egyptians of the 5th millennium BC pictorially represented geometric designs. All of the above interpretations are disputed, however, and the currently oldest undisputed mathematical documents are from Babylonian and Egyptian sources. Babylonian mathematics refers to any mathematics of the peoples of Mesopotamia from the days of the early Sumerians through the Hellenistic period almost to the dawn of Christianity. The majority of Babylonian mathematical work comes from two widely separated periods: the first few hundred years of the second millennium BC, and the last few centuries of the first millennium BC. It is named Babylonian mathematics due to the central role of Babylon as a place of study. Later, under the Arab Empire, Mesopotamia, especially Baghdad, once again became an important center of study for Islamic mathematics. In contrast to the sparsity of sources in Egyptian mathematics, our knowledge of Babylonian mathematics is derived from more than 400 clay tablets unearthed since the 1850s. Written in cuneiform script, the tablets were inscribed whilst the clay was moist and then baked hard. Some of these appear to be graded homework. The earliest evidence of written mathematics dates back to the ancient Sumerians, who developed a complex system of metrology from 3000 BC. From around 2500 BC onwards, the Sumerians wrote multiplication tables on clay tablets and dealt with geometrical exercises. The earliest traces of the Babylonian numerals also date back to this period.
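Plimpton 322 is usually read as a table of Pythagorean triples, triples of whole numbers satisfying a^2 + b^2 = c^2. A quick arithmetic check of one triple commonly associated with the tablet, (119, 120, 169); the association with the tablet is the standard interpretation, which this snippet of course cannot verify.

```python
def is_pythagorean_triple(a, b, c):
    # (a, b, c) is a Pythagorean triple when a^2 + b^2 = c^2.
    return a * a + b * b == c * c

print(is_pythagorean_triple(119, 120, 169))  # True
print(is_pythagorean_triple(3, 4, 6))        # False
```

Integer arithmetic makes the check exact: 119² + 120² = 14161 + 14400 = 28561 = 169².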
A proof from Euclid's Elements, widely considered the most influential textbook of all time.
The Babylonian mathematical tablet Plimpton 322, dated to 1800 BC.
Image of Problem 14 from the Moscow Mathematical Papyrus. The problem includes a diagram indicating the dimensions of the truncated pyramid.
One of the oldest surviving fragments of Euclid's Elements, found at Oxyrhynchus and dated to circa AD 100. The diagram accompanies Book II, Proposition 5.
10.
Greek mathematics
–
Greek mathematics, as the term is used in this article, is the mathematics written in Greek, developed from the 7th century BC to the 4th century AD around the shores of the Eastern Mediterranean. Greek mathematicians lived in cities spread over the entire Eastern Mediterranean from Italy to North Africa but were united by culture and language. Greek mathematics of the period following Alexander the Great is sometimes called Hellenistic mathematics. The word "mathematics" itself derives from the ancient Greek μάθημα, meaning "subject of instruction". The study of mathematics for its own sake and the use of generalized mathematical theories and proofs is the key difference between Greek mathematics and that of preceding civilizations. The origin of Greek mathematics is not well documented. The earliest advanced civilizations in Greece and in Europe were the Minoan and later Mycenaean civilizations, both of which flourished during the 2nd millennium BC. While these civilizations possessed writing and were capable of advanced engineering, including four-story palaces with drainage and beehive tombs, they left behind no mathematical documents. Though no direct evidence is available, it is generally thought that the neighboring Babylonian and Egyptian civilizations had an influence on the younger Greek tradition. Historians traditionally place the beginning of Greek mathematics proper at the age of Thales of Miletus. Little is known about the life and work of Thales, so little indeed that his dates of birth and death are estimated from the eclipse of 585 BC, which probably occurred while he was in his prime. Despite this, it is generally agreed that Thales is the first of the seven wise men of Greece. The two earliest mathematical theorems, Thales' theorem and the intercept theorem, are attributed to Thales. The former states that an angle inscribed in a semicircle is a right angle. It is for this reason that Thales is often hailed as the father of the deductive organization of mathematics and as the first true mathematician.
Thales is also thought to be the earliest known man in history to whom specific mathematical discoveries have been attributed. Another important figure in the development of Greek mathematics is Pythagoras of Samos. Like Thales, Pythagoras also traveled to Egypt and Babylon, then under the rule of Nebuchadnezzar. Pythagoras established an order called the Pythagoreans, which held knowledge and property in common, and hence all of the discoveries by individual Pythagoreans were attributed to the order. And since in antiquity it was customary to give all credit to the master, Pythagoras himself was given credit for the discoveries made by his order. Aristotle, for one, refused to attribute anything specifically to Pythagoras as an individual and only discussed the work of the Pythagoreans as a group. One of the most important characteristics of the Pythagorean order was that it maintained that the pursuit of philosophical and mathematical studies was a moral basis for the conduct of life. Indeed, the words "philosophy" and "mathematics" are said to have been coined by Pythagoras. From this love of knowledge came many achievements. It has been customarily said that the Pythagoreans discovered most of the material in the first two books of Euclid's Elements. The reason it is not clear exactly what either Thales or Pythagoras actually did is that almost no contemporary documentation has survived. The only evidence comes from traditions recorded in works such as Proclus' commentary on Euclid, written centuries later. Some of these later works, such as Aristotle's commentary on the Pythagoreans, are themselves only known from a few surviving fragments.
Greek mathematics
–
Statue of Euclid in the Oxford University Museum of Natural History
Greek mathematics
–
An illustration of Euclid 's proof of the Pythagorean Theorem
Greek mathematics
–
The Antikythera mechanism, an ancient mechanical calculator.
11.
Euclid's Elements
–
Euclid's Elements is a mathematical and geometric treatise consisting of 13 books attributed to the ancient Greek mathematician Euclid in Alexandria, Ptolemaic Egypt, circa 300 BC. It is a collection of definitions, postulates, propositions (theorems and constructions), and mathematical proofs of the propositions. The books cover Euclidean geometry and the ancient Greek version of elementary number theory. The Elements is the second-oldest extant Greek mathematical treatise after Autolycus' On the Moving Sphere, and it has proven instrumental in the development of logic and modern science. According to Proclus, the term "element" was used to describe a theorem that is all-pervading and helps furnish proofs of many other theorems. The word element in the Greek language is the same as letter; this suggests that theorems in the Elements should be seen as standing in the same relation to geometry as letters to language. Euclid's Elements has been referred to as the most successful and influential textbook ever written. For centuries, when the quadrivium was included in the curriculum of all university students, knowledge of at least part of Euclid's Elements was required of all students. Not until the 20th century, by which time its content was taught through other school textbooks, did it cease to hold this position. Scholars believe that the Elements is largely a collection of theorems proven by other mathematicians; the Elements may have been based on an earlier textbook by Hippocrates of Chios, who also may have originated the use of letters to refer to figures. The oldest complete manuscript, the Heiberg manuscript, is from a Byzantine workshop around 900 and is the basis of modern editions. Papyrus Oxyrhynchus 29 is a tiny fragment of an even older manuscript, but it only contains the statement of one proposition. Although known to, for instance, Cicero, no record exists of the text having been translated into Latin prior to Boethius in the fifth or sixth century.
The Arabs received the Elements from the Byzantines around 760; this version was translated into Arabic under Harun al-Rashid circa 800. The Byzantine scholar Arethas commissioned the copying of one of the extant Greek manuscripts of Euclid in the late ninth century. Although known in Byzantium, the Elements was lost to Western Europe until about 1120, when Adelard of Bath translated it into Latin from an Arabic version. The first printed edition appeared in 1482, and since then it has been translated into many languages and published in about a thousand different editions. Theon's Greek edition was recovered in 1533. In 1570, John Dee provided a widely respected Mathematical Preface, along with copious notes and supplementary material, to the first English edition by Henry Billingsley. Copies of the Greek text still exist, some of which can be found in the Vatican Library. The manuscripts available are of variable quality, and invariably incomplete. By careful analysis of the translations and originals, hypotheses have been made about the contents of the original text. Ancient texts which refer to the Elements itself, and to other mathematical theories that were current at the time it was written, are also important in this process. Such analyses were conducted by J. L. Heiberg and Sir Thomas Little Heath in their editions of the text. Also of importance are the scholia, or annotations to the text; these additions, which often distinguished themselves from the main text, gradually accumulated over time as opinions varied upon what was worthy of explanation or further study. The Elements is still considered a masterpiece in the application of logic to mathematics, and, in historical context, it has proven enormously influential in many areas of science.
Euclid's Elements
–
The frontispiece of Sir Henry Billingsley's first English version of Euclid's Elements, 1570
Euclid's Elements
–
A fragment of Euclid's "Elements" on part of the Oxyrhynchus papyri
Euclid's Elements
–
An illumination from a manuscript based on Adelard of Bath's translation of the Elements, c. 1309–1316; Adelard's 12th-century work, translated from Arabic, is the oldest surviving translation of the Elements into Latin.
Euclid's Elements
–
Euclidis – Elementorum libri XV. Paris: Hieronymum de Marnef & Guillaume Cavelat, 1573 (second edition after the 1557 ed.); in-8, 350, (2) pp. Thomas-Stanford, Early Editions of Euclid's Elements, no. 32. Mentioned in T. L. Heath's translation. Private collection of Hector Zenil.
12.
David Hilbert
–
David Hilbert was a German mathematician. He is recognized as one of the most influential and universal mathematicians of the 19th and early 20th centuries. Hilbert discovered and developed a broad range of fundamental ideas in many areas, including invariant theory and the axiomatization of geometry. He also formulated the theory of Hilbert spaces, one of the foundations of functional analysis. Hilbert adopted and warmly defended Georg Cantor's set theory and transfinite numbers. A famous example of his leadership in mathematics is his 1900 presentation of a collection of problems that set the course for much of the mathematical research of the 20th century. Hilbert and his students contributed significantly to establishing rigor and developed important tools used in modern mathematical physics. Hilbert is known as one of the founders of proof theory and mathematical logic. In late 1872, Hilbert entered the Friedrichskolleg Gymnasium, but, after an unhappy period, he transferred to the more science-oriented Wilhelm Gymnasium. Upon graduation, in autumn 1880, Hilbert enrolled at the University of Königsberg. In early 1882, Hermann Minkowski returned to Königsberg and entered the university. Hilbert knew his luck when he saw it: in spite of his father's disapproval, he soon became friends with the shy, gifted Minkowski. In 1884, Adolf Hurwitz arrived from Göttingen as an Extraordinarius. Hilbert obtained his doctorate in 1885, with a dissertation, written under Ferdinand von Lindemann, titled Über invariante Eigenschaften spezieller binärer Formen, insbesondere der Kugelfunktionen. Hilbert remained at the University of Königsberg as a Privatdozent from 1886 to 1895. In 1895, as a result of intervention on his behalf by Felix Klein, he obtained the position of Professor of Mathematics at the University of Göttingen. During the Klein and Hilbert years, Göttingen became the preeminent institution in the mathematical world, and Hilbert remained there for the rest of his life.
Among Hilbert's students were Hermann Weyl, chess champion Emanuel Lasker, and Ernst Zermelo; John von Neumann was his assistant. At the University of Göttingen, Hilbert was surrounded by a circle of some of the most important mathematicians of the 20th century, such as Emmy Noether. Between 1902 and 1939 Hilbert was editor of the Mathematische Annalen, the leading mathematical journal of the time. On hearing that one of his students had dropped out to study poetry, Hilbert is said to have remarked, "Good, he did not have enough imagination to become a mathematician." Hilbert lived to see the Nazis purge many of the prominent faculty members at the University of Göttingen in 1933; those forced out included Hermann Weyl, Emmy Noether and Edmund Landau. One who had to leave Germany, Paul Bernays, had collaborated with Hilbert in mathematical logic and co-authored with him the important book Grundlagen der Mathematik; this was a sequel to the Hilbert-Ackermann book Principles of Mathematical Logic from 1928. Hermann Weyl's successor was Helmut Hasse. About a year later, Hilbert attended a banquet and was seated next to the new Minister of Education, Bernhard Rust.
David Hilbert
–
David Hilbert (1912)
David Hilbert
–
The Mathematical Institute in Göttingen. Its new building, constructed with funds from the Rockefeller Foundation, was opened by Hilbert and Courant in 1930.
David Hilbert
–
Hilbert's tomb: Wir müssen wissen Wir werden wissen
13.
Truth
–
Truth is most often used to mean being in accord with fact or reality, or fidelity to an original or standard. Truth may also often be used in modern contexts to refer to an idea of "truth to self", or authenticity. The commonly understood opposite of truth is falsehood, which, correspondingly, can also take on a logical, factual, or ethical meaning. The concept of truth is discussed and debated in several contexts, including philosophy, art, and religion. Some philosophers view the concept of truth as basic, and unable to be explained in any terms that are more easily understood than the concept of truth itself. Commonly, truth is viewed as the correspondence of language or thought to an independent reality, in what is sometimes called the correspondence theory of truth. Other philosophers take this common meaning to be secondary and derivative; on this view, the conception of truth as correctness is a later derivation from the concept's original essence. Various theories and views of truth continue to be debated among scholars and philosophers. Language and words are a means by which humans convey information to one another, and the method used to determine what is a "truth" is termed a criterion of truth. The English word truth is derived from Old English tríewþ, tréowþ, trýwþ, Middle English trewþe, cognate to Old High German triuwida; like troth, it is a -th nominalisation of the adjective true. Old Norse trú meant "faith, word of honour; religious faith, belief". Thus, truth involves both the quality of "faithfulness, fidelity, loyalty, sincerity, veracity", and that of "agreement with fact or reality", in Anglo-Saxon expressed by sōþ. All Germanic languages besides English have introduced a terminological distinction between truth as "fidelity" and truth as "factuality". To express "factuality", North Germanic opted for nouns derived from sanna "to assert, affirm", while continental West Germanic opted for continuations of wâra "faith, trust, pact".
Romance languages use terms following the Latin veritas, while the Greek aletheia and the Russian pravda have separate etymological origins. Each of the major substantive theories of truth presents perspectives that are widely shared by published scholars. However, no theory is universally accepted. More recently developed deflationary or minimalist theories of truth have emerged as competitors to the substantive theories. Minimalist reasoning centres around the notion that the application of a term like "true" to a statement does not assert anything significant about it, for instance, anything about its nature. Minimalist reasoning treats truth as a label utilised in general discourse to express agreement or to stress claims. Correspondence theories emphasise that true beliefs and true statements correspond to the actual state of affairs. This type of theory stresses a relationship between thoughts or statements on one hand, and things or objects on the other; it is a traditional model tracing its origins to ancient Greek philosophers such as Socrates, Plato, and Aristotle. This class of theories holds that the truth or the falsity of a representation is determined in principle entirely by how it relates to things. Aquinas restated the theory as: a judgment is said to be true when it conforms to the external reality. Many modern theorists have stated that this ideal cannot be achieved without analysing additional factors; for example, language plays a role in that all languages have words to represent concepts that are virtually undefined in other languages.
Truth
–
Time Saving Truth from Falsehood and Envy, François Lemoyne, 1737
Truth
–
Truth, holding a mirror and a serpent (1896). Olin Levi Warner, Library of Congress Thomas Jefferson Building, Washington, D.C.
Truth
–
An angel carrying the banner of "Truth", Roslin, Midlothian
Truth
–
Walter Seymour Allward 's Veritas (Truth) outside Supreme Court of Canada, Ottawa, Ontario Canada
14.
Definition
–
A definition is a statement of the meaning of a term. Definitions can be classified into two large categories: intensional definitions, which try to give the sense of a term, and extensional definitions, which try to list the objects that a term describes. Another important category of definitions is the class of ostensive definitions, which convey the meaning of a term by pointing out examples. A term may have many different senses and multiple meanings, and thus require multiple definitions. In mathematics, a definition is used to give a precise meaning to a new term. Definitions and axioms are the basis on which all of mathematics is constructed. In modern usage, a definition is something, typically expressed in words, that attaches a meaning to a word or group of words. The word or group of words that is to be defined is called the definiendum, and the word, group of words, or action that defines it is called the definiens. In the definition "An elephant is a large gray animal native to Asia and Africa", the word elephant is the definiendum and "a large gray animal native to Asia and Africa" is the definiens. Note that the definiens is not the meaning of the word defined, but something that conveys the same meaning as that word. There are many sub-types of definitions, often specific to a given field of knowledge or study. An intensional definition, also called a connotative definition, specifies the necessary and sufficient conditions for membership in a set; any definition that attempts to set out the essence of something, such as that by genus and differentia, is an intensional definition. An extensional definition, also called a denotative definition, of a concept or term specifies its extension: it is a list naming every object that is a member of a specific set. An example of an extensional definition of the seven deadly sins would be the list of wrath, greed, sloth, pride, lust, envy, and gluttony. A genus–differentia definition is a type of intensional definition that takes a large category (the genus) and narrows it down to a smaller category by a distinguishing characteristic (the differentia); the differentia is the portion of the new definition that is not provided by the genus. For example, consider the following genus–differentia definition: a triangle is a plane figure that has three straight bounding sides.
Likewise, a quadrilateral is a plane figure that has four straight bounding sides; those definitions can be expressed as one genus ("a plane figure") and two differentiae ("that has three straight bounding sides" and "that has four straight bounding sides"). It is possible to have two different genus–differentia definitions that describe the same term, especially when the term describes the overlap of two large categories. For instance, both of these genus–differentia definitions of "square" are equally acceptable: a square is a rectangle that is a rhombus; a square is a rhombus that is a rectangle. Thus, a square is a member of both the genus rectangle and the genus rhombus. One important form of the extensional definition is ostensive definition. This gives the meaning of a term by pointing, in the case of an individual, to the thing itself, or, in the case of a class, to examples of the right kind. So one can explain who Alice is by pointing her out to another, or what a rabbit is by pointing at several. The process of ostensive definition itself was critically appraised by Ludwig Wittgenstein.
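The two equivalent genus–differentia definitions of "square" can be sketched in code: each genus becomes a class, and the defined term a subclass of both. The class names below are illustrative modelling choices, not anything prescribed by the text.

```python
# Sketch: a square belongs to both the genus "rectangle" and the genus
# "rhombus", so either genus can anchor its genus-differentia definition.

class Quadrilateral:                # genus: plane figure with four straight sides
    pass

class Rectangle(Quadrilateral):     # differentia: four right angles
    pass

class Rhombus(Quadrilateral):       # differentia: four equal sides
    pass

class Square(Rectangle, Rhombus):   # "a rectangle that is a rhombus" and
    pass                            # "a rhombus that is a rectangle"

s = Square()
print(isinstance(s, Rectangle), isinstance(s, Rhombus))  # True True
```

Both `isinstance` checks succeed, mirroring the claim that the two definitions pick out the same term.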
Definition
–
A definition states the meaning of a word using other words. This is sometimes challenging. Common dictionaries contain lexical, descriptive definitions, but there are various types of definition - all with different purposes and focuses.
15.
Galileo Galilei
–
Galileo Galilei was an Italian polymath: astronomer, physicist, engineer, philosopher, and mathematician. He played a major role in the scientific revolution of the seventeenth century. Galileo also worked in applied science and technology, inventing an improved military compass and other instruments. Galileo's championing of heliocentrism and Copernicanism was controversial during his lifetime, when most subscribed to either geocentrism or the Tychonic system. He met with opposition from astronomers, who doubted heliocentrism because of the absence of an observed stellar parallax. He was tried by the Inquisition and found "vehemently suspect of heresy", and he spent the rest of his life under house arrest. He has been called the "father of observational astronomy", the "father of modern physics", the "father of the scientific method", and the "father of science". Galileo was born in Pisa, Italy, on 15 February 1564, the first of six children of Vincenzo Galilei, a famous lutenist, composer, and music theorist, and Giulia Ammannati. Three of Galileo's five siblings survived infancy. The youngest, Michelangelo, also became a noted lutenist and composer, although he contributed to financial burdens during Galileo's young adulthood. Michelangelo was unable to contribute his fair share of their father's promised dowries to their brothers-in-law, who would later attempt to seek legal remedies for payments due. Michelangelo would also occasionally have to borrow funds from Galileo to support his musical endeavours, and these financial burdens may have contributed to Galileo's early desire to develop inventions that would bring him additional income. When Galileo Galilei was eight, his family moved to Florence, and he was then educated in the Vallombrosa Abbey, about 30 km southeast of Florence. His ancestor Galileo Bonaiuti was buried in the family church, the Basilica of Santa Croce in Florence.
It was common for mid-sixteenth-century Tuscan families to name the eldest son after the parents' surname; hence, Galileo Galilei was not necessarily named after his ancestor Galileo Bonaiuti. The Italian male given name Galileo (and thence the surname Galilei) derives from the Latin Galilaeus, meaning "of Galilee". The biblical roots of Galileo's name and surname were to become the subject of a famous pun. In 1614, during the Galileo affair, one of Galileo's opponents, the Dominican priest Tommaso Caccini, delivered against Galileo a controversial and influential sermon; in it he made a point of quoting Acts 1:11, "Ye men of Galilee, why stand ye gazing up into heaven?" Despite being a genuinely pious Roman Catholic, Galileo fathered three children out of wedlock with Marina Gamba: two daughters, Virginia and Livia, and a son, Vincenzo. Because of their illegitimate birth, the girls' only worthy alternative was the religious life; both were accepted by the convent of San Matteo in Arcetri and remained there for the rest of their lives. Virginia took the name Maria Celeste upon entering the convent; she died on 2 April 1634, and is buried with Galileo at the Basilica of Santa Croce, Florence. Livia took the name Sister Arcangela and was ill for most of her life. Vincenzo was later legitimised as the legal heir of Galileo and married Sestilia Bocchineri.
Galileo Galilei
–
Portrait of Galileo Galilei by Giusto Sustermans
Galileo Galilei
–
Galileo's beloved elder daughter, Virginia (Sister Maria Celeste), was particularly devoted to her father. She is buried with him in his tomb in the Basilica of Santa Croce, Florence.
Galileo Galilei
–
Galileo Galilei. Portrait by Leoni
Galileo Galilei
–
Cristiano Banti 's 1857 painting Galileo facing the Roman Inquisition
16.
Benjamin Peirce
–
Benjamin Peirce was an American mathematician who taught at Harvard University for approximately 50 years. He made contributions to celestial mechanics, statistics, number theory, algebra, and the philosophy of mathematics. He was the son of Benjamin Peirce, later librarian of Harvard, and Lydia Ropes Nichols Peirce. After graduating from Harvard, he remained as a tutor and was appointed professor of mathematics in 1831. He added astronomy to his portfolio in 1842, and remained as Harvard professor until his death. In addition, he was instrumental in the development of Harvard's science curriculum and served as the college librarian. Benjamin Peirce is often regarded as the earliest American scientist whose research was recognized as world class. He was an apologist for slavery, opining that it should be condoned if it was used to allow an elite to pursue scientific enquiry. In number theory, he proved there is no odd perfect number with fewer than four distinct prime factors. In algebra, he was notable for the study of associative algebras: he first introduced the terms idempotent and nilpotent in 1870 to describe elements of these algebras, and he also introduced the Peirce decomposition. In the philosophy of mathematics, he is known for the statement that "Mathematics is the science that draws necessary conclusions", a definition of mathematics later taken up by his son, Charles Sanders Peirce. Like George Boole, Peirce believed that mathematics could be used to study logic. These ideas were developed by Charles Sanders Peirce, who noted that logic also includes the study of faulty reasoning. In contrast, the later logicist program of Gottlob Frege and Bertrand Russell attempted to base mathematics on logic. Peirce proposed what came to be known as Peirce's criterion for the statistical treatment of outliers, that is, of apparently extreme observations; his ideas here, too, were further developed by Charles Sanders Peirce. Peirce, together with his son Charles, was an expert witness in the Howland will forgery trial.
Their analysis of the questioned signature showed that it resembled another particular handwriting example so closely that the chances of such a match were statistically extremely remote. He was devoutly religious, though he seldom published his theological thoughts. Peirce credited God as shaping nature in ways that account for the efficacy of pure mathematics in describing empirical phenomena; according to one encyclopedia, Peirce viewed mathematics as the study of God's work by God's creatures. He married Sarah Hunt Mills, the daughter of U.S. Senator Elijah Hunt Mills. The lunar crater Peirce is named for him, and post-doctoral positions in Harvard University's mathematics department are named in his honor as Benjamin Peirce Fellows. The United States Coast Survey ship USCS Benjamin Peirce, in commission from 1855 to 1868, was also named for him. His works include An Elementary Treatise on Plane and Spherical Trigonometry (Boston: James Munroe; Google eprints of successive editions, 1840–1862).
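The two terms Peirce coined, idempotent and nilpotent, can be illustrated with a small sketch (not Peirce's own notation): an element e of an algebra is idempotent if e·e = e, and nilpotent if some power of it is zero. Here the algebra is taken, for illustration, to be 2x2 real matrices under matrix multiplication.

```python
# Idempotent: P*P == P (e.g. a projection matrix).
# Nilpotent:  some power of N is the zero matrix (here already N*N == 0).

def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P = [[1, 0], [0, 0]]   # projection onto the first coordinate: idempotent
N = [[0, 1], [0, 0]]   # strictly upper-triangular: nilpotent

print(matmul(P, P) == P)                   # True
print(matmul(N, N) == [[0, 0], [0, 0]])    # True
```

In Peirce's study of associative algebras, decomposing an algebra relative to such idempotent elements is exactly what the Peirce decomposition does.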
Benjamin Peirce
–
Benjamin Peirce
Benjamin Peirce
–
With Louis Agassiz
17.
Engineering
–
The term engineering is derived from the Latin ingenium, meaning "cleverness", and ingeniare, meaning "to contrive, devise". Engineering has existed since ancient times as humans devised fundamental inventions such as the wedge, lever, wheel, and pulley; each of these inventions is essentially consistent with the modern definition of engineering. The term engineering is derived from the word engineer, which itself dates back to 1390, when an engineer originally referred to "a constructor of military engines". In this context, now obsolete, an "engine" referred to a military machine, i.e., a mechanical contraption used in war. Notable examples of the obsolete usage which have survived to the present day are military engineering corps, such as the U.S. Army Corps of Engineers. The word engine itself is of even older origin, ultimately deriving from the Latin ingenium, meaning "innate quality, especially mental power, hence a clever invention". The earliest civil engineer known by name is Imhotep; as one of the officials of the Pharaoh Djoser, he probably designed and supervised the construction of the Pyramid of Djoser at Saqqara in Egypt around 2630–2611 BC. Ancient Greece developed machines in both civilian and military domains: the Antikythera mechanism, the first known mechanical computer, and the mechanical inventions of Archimedes are examples of early mechanical engineering. In the Middle Ages, the trebuchet was developed. The first steam engine was built in 1698 by Thomas Savery, and the development of this device gave rise to the Industrial Revolution in the coming decades. With the rise of engineering as a profession in the 18th century, the term became more narrowly applied to fields in which mathematics and science were applied to these ends. Similarly, in addition to military and civil engineering, the fields then known as the mechanic arts became incorporated into engineering. The inventions of Thomas Newcomen and the Scottish engineer James Watt gave rise to modern mechanical engineering. The development of specialized machines and machine tools during the Industrial Revolution led to the rapid growth of mechanical engineering both in its birthplace Britain and abroad.
John Smeaton was the first self-proclaimed civil engineer and is regarded as the father of civil engineering. He was an English civil engineer responsible for the design of bridges, canals, and harbours, and he was also a capable mechanical engineer and an eminent physicist. Smeaton designed the third Eddystone Lighthouse, where he pioneered the use of hydraulic lime. His lighthouse remained in use until 1877; it was then dismantled and partially rebuilt at Plymouth Hoe, where it is known as Smeaton's Tower. The United States census of 1850 listed the occupation of "engineer" for the first time, with a count of 2,000; there were fewer than 50 engineering graduates in the U.S. before 1865. In 1870 there were a dozen U.S. mechanical engineering graduates; by 1890 there were 6,000 engineers in civil, mining, mechanical and electrical fields. There was no chair of applied mechanism and applied mechanics established at Cambridge until 1875. The theoretical work of James Maxwell and Heinrich Hertz in the late 19th century gave rise to the field of electronics.
Engineering
–
The steam engine, a major driver in the Industrial Revolution, underscores the importance of engineering in modern history. This beam engine is on display in the Technical University of Madrid.
Engineering
–
Relief map of the Citadel of Lille, designed in 1668 by Vauban, the foremost military engineer of his age.
Engineering
–
The Ancient Romans built aqueducts to bring a steady supply of clean fresh water to cities and towns in the empire.
Engineering
–
The International Space Station represents a modern engineering challenge from many disciplines.
18.
Finance
–
Finance is a field that deals with the study of investments. It includes the dynamics of assets and liabilities over time under conditions of different degrees of uncertainty and risk. Finance can also be defined as the science of money management. Finance aims to price assets based on their risk level and their expected rate of return. Finance can be broken into three different sub-categories: public finance, corporate finance and personal finance. Personal finance addresses, among other things, protection against unforeseen personal events, e.g. through health and property insurance, as well as investing and saving for retirement. Personal finance may also involve paying for a loan or other debt obligations. Net worth is a person's balance sheet, calculated by adding up all assets under that person's control, minus all liabilities of the household, at one point in time. Household cash flow totals up all the expected sources of income within a year, minus all expected expenses within the same year. From this analysis, the financial planner can determine to what degree, and in what time, personal goals can be accomplished. Adequate protection is the analysis of how to protect a household from unforeseen risks. These risks can be divided into liability, property, death, disability, health, and long-term care; some of these risks may be self-insurable, while most will require the purchase of an insurance contract. Determining how much insurance to get, at the most cost-effective terms, requires knowledge of the market for personal insurance. Business owners, professionals, athletes and entertainers require specialized insurance professionals to adequately protect themselves. Since insurance also enjoys some tax benefits, utilizing insurance investment products may be a piece of the overall investment planning. Tax planning matters because typically the income tax is the single largest expense in a household; managing taxes is not a question of if you will pay taxes, but when and how much. Governments give many incentives in the form of tax deductions and credits, and most modern governments use a progressive tax. Typically, as one's income grows, a higher marginal rate of tax must be paid.
Understanding how to take advantage of these tax breaks when planning one's personal finances can make a significant impact and save money in the long term. Investment and accumulation goals cover planning how to accumulate enough money for large purchases; major reasons to accumulate assets include purchasing a house or car, starting a business, paying for education expenses, and saving for retirement. Achieving these goals requires projecting what they will cost and when funds will need to be withdrawn. A major risk to the household in achieving its accumulation goal is the rate of price increases over time, or inflation. Using net present value calculators, the financial planner will suggest a combination of asset earmarking and regular savings. In order to overcome the rate of inflation, the investment portfolio has to earn a higher rate of return, which typically subjects the portfolio to a number of risks. Managing these portfolio risks is most often accomplished using asset allocation, which seeks to diversify investment risk and opportunity.
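The interaction between inflation and required return described above can be sketched numerically. The rates, horizon, and starting balance below are illustrative assumptions, not figures from the text.

```python
# Sketch: how inflation erodes a portfolio's nominal growth.
# All numbers are illustrative assumptions.

nominal_rate = 0.05   # assumed annual portfolio return
inflation = 0.02      # assumed annual rate of price increases
years = 10
start = 10_000.0      # initial savings

# Future value in nominal terms, then deflated into today's purchasing power.
nominal_value = start * (1 + nominal_rate) ** years
real_value = nominal_value / (1 + inflation) ** years

# The real rate of return implied by the two rates (Fisher relation).
real_rate = (1 + nominal_rate) / (1 + inflation) - 1

print(round(nominal_value, 2))
print(round(real_value, 2))
print(round(real_rate, 4))
```

The deflated figure is what matters for an accumulation goal: a portfolio only "overcomes" inflation to the extent that its real rate of return, not its nominal one, is positive.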
Finance
–
London Stock Exchange, global center of finance.
Finance
–
Wall Street, the center of American finance.
19.
Applied mathematics
–
Applied mathematics is a branch of mathematics that deals with mathematical methods that find use in science, engineering, business, computer science, and industry. Thus, applied mathematics is a combination of mathematical science and specialized knowledge. The term applied mathematics also describes the professional specialty in which mathematicians work on practical problems by formulating and studying mathematical models. The activity of applied mathematics is thus intimately connected with research in pure mathematics. Historically, applied mathematics consisted principally of applied analysis, most notably differential equations, and approximation theory. Quantitative finance is now taught in mathematics departments across universities, and mathematical finance is considered a full branch of applied mathematics. Engineering and computer science departments have traditionally made use of applied mathematics. Today, the term applied mathematics is used in a broader sense. It includes the classical areas noted above as well as other areas that have become increasingly important in applications. Even fields such as number theory that are part of pure mathematics are now important in applications, such as cryptography. There is no consensus as to what the various branches of applied mathematics are; such categorizations are made difficult by the way mathematics and science change over time, and also by the way universities organize departments, courses, and degrees. Many mathematicians distinguish between applied mathematics, which is concerned with mathematical methods, and the applications of mathematics within science and engineering. Mathematicians such as Poincaré and Arnold deny the existence of "applied mathematics" and claim that there are only "applications of mathematics"; similarly, non-mathematicians blend applied mathematics and applications of mathematics. The use and development of mathematics to solve industrial problems is also called industrial mathematics. Historically, mathematics was most important in the natural sciences and engineering.
Academic institutions are not consistent in the way they group and label courses, programs, and degrees in applied mathematics. At some schools, there is a single mathematics department, whereas others have separate departments for Applied Mathematics and (Pure) Mathematics. It is very common for statistics departments to be separate at schools with graduate programs. Many applied mathematics programs consist primarily of cross-listed courses and jointly appointed faculty in departments representing applications. Some Ph.D. programs in applied mathematics require little or no coursework outside of mathematics; in some respects this difference reflects the distinction between "application of mathematics" and "applied mathematics". Research universities dividing their mathematics department into pure and applied sections include MIT. Brigham Young University also has an Applied and Computational Emphasis, a program that allows students to graduate with a mathematics degree with an emphasis in applied math.
Applied mathematics
–
Efficient solutions to the vehicle routing problem require tools from combinatorial optimization and integer programming.
20.
Statistics
–
Statistics is a branch of mathematics dealing with the collection, analysis, interpretation, presentation, and organization of data. In applying statistics to, e.g., a scientific, industrial, or social problem, it is conventional to begin with a statistical population or process to be studied. Populations can be diverse topics such as "all people living in a country" or "every atom composing a crystal". Statistics deals with all aspects of data, including the planning of data collection in terms of the design of surveys and experiments. The statistician Sir Arthur Lyon Bowley defined statistics as "numerical statements of facts in any department of inquiry placed in relation to each other". When census data cannot be collected, statisticians collect data by developing specific experiment designs and survey samples; representative sampling assures that inferences and conclusions can safely extend from the sample to the population as a whole. In contrast, an observational study does not involve experimental manipulation. Inferences in mathematical statistics are made under the framework of probability theory, which deals with the analysis of random phenomena. A standard statistical procedure involves the test of the relationship between two data sets, or between a data set and synthetic data drawn from an idealized model. A hypothesis is proposed for the statistical relationship between the two data sets, and this is compared as an alternative to an idealized null hypothesis of no relationship between the two data sets. Rejecting or disproving the null hypothesis is done using statistical tests that quantify the sense in which the null can be proven false, given the data that are used in the test. Working from a null hypothesis, two basic forms of error are recognized: Type I errors (the null hypothesis is falsely rejected, giving a "false positive") and Type II errors (the null hypothesis fails to be rejected when it is actually false, giving a "false negative"). Multiple problems have come to be associated with this framework, ranging from obtaining a sufficient sample size to specifying an adequate null hypothesis. Measurement processes that generate statistical data are also subject to error.
Many of these errors are classified as random or systematic. The presence of missing data or censoring may result in biased estimates, and specific techniques have been developed to address these problems. Statistics continues to be an area of active research, for example on the problem of how to analyze big data. Statistics is a body of science that pertains to the collection, analysis, and interpretation or explanation of data. Some consider statistics to be a mathematical science rather than a branch of mathematics. While many scientific investigations make use of data, statistics is concerned with the use of data in the context of uncertainty. Mathematical techniques used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure-theoretic probability theory. In applying statistics to a problem, it is common practice to start with a population or process to be studied. Populations can be diverse topics such as all people living in a country or every atom composing a crystal. Ideally, statisticians compile data about the entire population; this may be organized by governmental statistical institutes
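The null-hypothesis framework and Type I errors described above can be made concrete with a small simulation. The Python sketch below is illustrative, not from the article: the function name, the 1.96 threshold (a two-sided test at the 5% level with known variance), the sample size, and the trial count are all assumed choices. Under a true null hypothesis, the test should wrongly reject about 5% of the time.

```python
import random
import statistics

random.seed(0)  # fixed seed so the illustration is reproducible

def z_test_rejects(sample, mu0=0.0, sigma=1.0):
    """Two-sided one-sample z-test at the 5% level: reject the null
    hypothesis 'the population mean equals mu0' when the standardized
    sample mean lies outside +/- 1.96."""
    n = len(sample)
    z = (statistics.mean(sample) - mu0) / (sigma / n ** 0.5)
    return abs(z) > 1.96

# Simulate many experiments in which the null hypothesis is actually true:
trials = 2000
rejections = sum(
    z_test_rejects([random.gauss(0.0, 1.0) for _ in range(30)])
    for _ in range(trials)
)
print(rejections / trials)  # empirical Type I error rate, near 0.05
```

The empirical rejection rate hovers around the nominal 5%, which is exactly what "Type I error rate equals the significance level" means in practice.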
Statistics
–
Scatter plots are used in descriptive statistics to show the observed relationships between different variables.
Statistics
–
More probability density is found as one gets closer to the expected (mean) value in a normal distribution. Statistics used in standardized testing assessment are shown. The scales include standard deviations, cumulative percentages, percentile equivalents, Z-scores, T-scores, standard nines, and percentages in standard nines.
Statistics
–
Gerolamo Cardano, the earliest pioneer of the mathematics of probability.
Statistics
–
Karl Pearson, a founder of mathematical statistics.
21.
Game theory
–
Game theory is the study of mathematical models of conflict and cooperation between intelligent, rational decision-makers. Game theory is used in economics, political science, and psychology, as well as in logic and computer science. Originally, it addressed zero-sum games, in which one person's gains result in losses for the other participants. Today, game theory applies to a wide range of behavioral relations and is now an umbrella term for the science of logical decision making in humans, animals, and computers. Modern game theory began with the idea regarding the existence of mixed-strategy equilibria in two-person zero-sum games. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, and his paper was followed by the 1944 book Theory of Games and Economic Behavior, co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility. This theory was developed extensively in the 1950s by many scholars. Game theory was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been recognized as an important tool in many fields, with the Nobel Memorial Prize in Economic Sciences going to game theorist Jean Tirole in 2014; John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology. Early discussions of examples of two-person games occurred long before the rise of modern mathematical game theory. The first known discussion of game theory occurred in a letter written in 1713 by Charles Waldegrave, an active Jacobite and uncle to James Waldegrave, a British diplomat. In this letter, Waldegrave provides a mixed-strategy solution to a two-person version of the card game le Her. James Madison made what we now recognize as a game-theoretic analysis of the ways states can be expected to behave under different systems of taxation. 
In 1913 Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels, which proved that the optimal chess strategy is strictly determined. This paved the way for more general theorems. The Danish mathematician Zeuthen proved that the mathematical model had a winning strategy by using Brouwer's fixed-point theorem. In his 1938 book Applications aux Jeux de Hasard and earlier notes, Borel conjectured the non-existence of mixed-strategy equilibria in two-person zero-sum games, a conjecture that was proved false by von Neumann. Game theory did not really exist as a unique field until John von Neumann published a paper in 1928. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, and his paper was followed by his 1944 book Theory of Games and Economic Behavior, co-authored with Oskar Morgenstern
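The maximin reasoning behind mixed-strategy equilibria in two-person zero-sum games can be sketched numerically. The Python snippet below is an illustrative grid search of my own, not a method from the article: for the classic "matching pennies" payoff matrix it recovers the row player's optimal mix, which is the well-known 50/50 strategy with game value 0.

```python
# Illustrative sketch: find the row player's maximin mixed strategy in a
# 2x2 zero-sum game by grid search. Payoffs are for the row player; the
# example matrix is "matching pennies".
A = [[1, -1],
     [-1, 1]]

def worst_case(p):
    """Row player's guaranteed expected payoff when mixing with
    probability p on row 0, assuming the column player best-responds."""
    col0 = p * A[0][0] + (1 - p) * A[1][0]  # column player picks column 0
    col1 = p * A[0][1] + (1 - p) * A[1][1]  # column player picks column 1
    return min(col0, col1)  # the column player minimizes the row payoff

best_p = max((i / 1000 for i in range(1001)), key=worst_case)
game_value = worst_case(best_p)
print(best_p, game_value)  # 0.5 0.0 for matching pennies
```

Any deviation from the 50/50 mix can be exploited by the opponent, which is why the worst-case payoff peaks exactly at p = 0.5; von Neumann's minimax theorem guarantees such an equilibrium exists in every finite two-person zero-sum game.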
Game theory
–
An extensive form game
22.
Pure mathematics
–
Broadly speaking, pure mathematics is mathematics that studies entirely abstract concepts. Even though the pure and applied viewpoints are distinct philosophical positions, in practice there is much overlap in the activity of pure and applied mathematicians. To develop accurate models for describing the real world, many applied mathematicians draw on tools and techniques that are often considered to be pure mathematics. On the other hand, many pure mathematicians draw on natural and social phenomena as inspiration for their abstract research. Ancient Greek mathematicians were among the earliest to make a distinction between pure and applied mathematics. Plato helped to create the gap between arithmetic, now called number theory, and logistic, now called arithmetic. Euclid of Alexandria, when asked by one of his students of what use was the study of geometry, is said to have had the student paid a small coin, since he "must make gain" of what he learns. The term itself is enshrined in the full title of the Sadleirian Chair, founded in the mid-nineteenth century. The idea of a separate discipline of pure mathematics may have emerged at that time. The generation of Gauss made no sweeping distinction of the kind; in the following years, specialisation and professionalisation started to make a rift more apparent. At the start of the twentieth century mathematicians took up the axiomatic method; in fact, in an axiomatic setting "rigorous" adds nothing to the idea of proof. Pure mathematics, according to a view that can be ascribed to the Bourbaki group, is what is proved. "Pure mathematician" became a recognized vocation, achievable through training. One central concept in pure mathematics is the idea of generality. One can use generality to avoid duplication of effort, proving a general result instead of having to prove separate cases independently. Generality can facilitate connections between different branches of mathematics; category theory is one area of mathematics dedicated to exploring this commonality of structure as it plays out in some areas of math. Generality's impact on intuition is both dependent on the subject and a matter of personal preference or learning style. 
Often generality is seen as a hindrance to intuition, although it can certainly function as an aid to it. Each of these branches of abstract mathematics has many sub-specialties. A steep rise in abstraction was seen in the mid-20th century; in practice, however, these developments led to a sharp divergence from physics, particularly from 1950 to 1983. Later this was criticised, for example by Vladimir Arnold, as too much Hilbert, not enough Poincaré. The point does not yet seem to be settled, in that string theory pulls one way, while discrete mathematics pulls back towards proof as central
Pure mathematics
–
An illustration of the Banach–Tarski paradox, a famous result in pure mathematics. Although it is proven that it is possible to convert one sphere into two using nothing but cuts and rotations, the transformation involves objects that cannot exist in the physical world.
23.
History of mathematics
–
Before the modern age and the worldwide spread of knowledge, written examples of new mathematical developments have come to light only in a few locales. The most ancient mathematical texts available are Plimpton 322, the Rhind Mathematical Papyrus, and the Moscow Mathematical Papyrus. All of these texts concern the so-called Pythagorean theorem, which seems to be the most ancient and widespread mathematical development after basic arithmetic and geometry. Greek mathematics greatly refined the methods and expanded the subject matter of mathematics. Chinese mathematics made early contributions, including a place value system. Islamic mathematics, in turn, developed and expanded the mathematics known to these civilizations. Many Greek and Arabic texts on mathematics were then translated into Latin. From ancient times through the Middle Ages, periods of mathematical discovery were often followed by centuries of stagnation. Beginning in Renaissance Italy in the 16th century, new mathematical developments were made at an increasing pace that continues to the present day. The origins of mathematical thought lie in the concepts of number, magnitude, and form. Modern studies of animal cognition have shown that these concepts are not unique to humans. Such concepts would have been part of everyday life in hunter-gatherer societies. The idea of the number concept evolving gradually over time is supported by the existence of languages which preserve the distinction between "one", "two", and "many", but not of numbers larger than two. Prehistoric artifacts discovered in Africa, dated to 20,000 years old or more, suggest early attempts to quantify time. The Ishango bone, found near the headwaters of the Nile river, may be more than 20,000 years old; common interpretations are that the Ishango bone shows either the earliest known demonstration of sequences of prime numbers or a six-month lunar calendar. 
He also writes that no attempt has been made to explain why a tally of something should exhibit multiples of two, prime numbers between 10 and 20, and some numbers that are almost multiples of 10. Predynastic Egyptians of the 5th millennium BC pictorially represented geometric designs. All of the above are disputed, however, and the currently oldest undisputed mathematical documents are Babylonian. Babylonian mathematics refers to any mathematics of the peoples of Mesopotamia from the days of the early Sumerians through the Hellenistic period almost to the dawn of Christianity. The majority of Babylonian mathematical work comes from two widely separated periods: the first few hundred years of the second millennium BC (the Old Babylonian period) and the last few centuries of the first millennium BC (the Seleucid period). It is named Babylonian mathematics due to the central role of Babylon as a place of study. Later, under the Arab Empire, Mesopotamia, especially Baghdad, once again became an important center of study for Islamic mathematics. In contrast to the sparsity of sources in Egyptian mathematics, our knowledge of Babylonian mathematics is derived from more than 400 clay tablets unearthed since the 1850s. Written in cuneiform script, the tablets were inscribed whilst the clay was moist; some of these appear to be graded homework. The earliest evidence of written mathematics dates back to the ancient Sumerians, who developed a complex system of metrology from 3000 BC. From around 2500 BC onwards, the Sumerians wrote multiplication tables on clay tablets and dealt with geometrical exercises; the earliest traces of the Babylonian numerals also date back to this period
History of mathematics
–
A proof from Euclid's Elements, widely considered the most influential textbook of all time.
History of mathematics
–
The Babylonian mathematical tablet Plimpton 322, dated to 1800 BC.
History of mathematics
–
Image of Problem 14 from the Moscow Mathematical Papyrus. The problem includes a diagram indicating the dimensions of the truncated pyramid.
History of mathematics
–
One of the oldest surviving fragments of Euclid's Elements, found at Oxyrhynchus and dated to circa AD 100. The diagram accompanies Book II, Proposition 5.
24.
Mayan numerals
–
The Maya numeral system is a vigesimal (base-20) positional notation used in the Maya civilization to represent numbers. The numerals are made up of three symbols: zero, one (a dot) and five (a bar). For example, thirteen is written as three dots in a horizontal row above two horizontal bars stacked above each other. Numbers after 19 were written vertically in powers of twenty. For example, thirty-three would be written as one dot above three dots, which are in turn atop two bars: the first dot represents one twenty, or 1×20, which is added to three dots and two bars, or thirteen. Upon reaching 20² or 400, another row is started. The number 429 would be written as one dot above one dot above four dots and a bar, i.e. 1×400 + 1×20 + 9. The powers of twenty are numerals, just as the Hindu-Arabic numeral system uses powers of ten. Other than the bar-and-dot notation, Maya numerals could be illustrated by face-type glyphs or pictures; the face glyph for a number represents the deity associated with the number. These face number glyphs were rarely used and are mostly seen on some of the most elaborate monumental carvings. Addition and subtraction: adding and subtracting numbers below 20 using Maya numerals is very simple. Addition is performed by combining the numeric symbols at each level: if five or more dots result from the combination, five dots are removed and replaced by a bar; if four or more bars result, four bars are removed and a dot is added to the next higher row. Similarly with subtraction: remove the elements of the subtrahend symbol from the minuend symbol. If there are not enough dots in a minuend position, a bar is replaced by five dots; if there are not enough bars, a dot is removed from the next higher minuend symbol in the column. The Maya/Mesoamerican Long Count calendar required the use of zero as a place-holder within its vigesimal positional numeral system. A shell glyph was used as the symbol for zero in these Long Count dates. 
However, since the eight earliest Long Count dates appear outside the Maya homeland, it is assumed that the use of zero predated the Maya; indeed, many of the earliest Long Count dates were found within the Olmec heartland. However, the Olmec civilization had come to an end by the 4th century BC. In the Long Count portion of the Maya calendar, a variation on the strictly vigesimal numbering is used. The Long Count changes in the second place value: it is not 20×20 = 400, as would otherwise be expected, but 18×20 = 360. This is supposed to be because 360 is roughly the number of days in a year. Subsequent place values return to base-twenty. In fact, every known example of large numbers uses this modified vigesimal system, and it is reasonable to assume, though not proven by any evidence, that the normal system in use was a pure base-20 system. Maya Mathematics: an online converter from decimal numeration to Maya numeral notation. Anthropomorphic Maya numbers: an online story of number representations
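The place-value arithmetic described above is easy to mechanize. The short Python sketch below is my own illustration (the function name and interface are not from the article): it decomposes a number into Maya place values, including the Long Count variant in which the second position carries a multiplier of 18 rather than 20.

```python
def to_maya_digits(n, long_count=False):
    """Return the Maya place values of n, least significant first.
    Pure vigesimal counting uses base 20 in every position; the Long
    Count calendar instead gives the second position a multiplier of
    18, making the third place worth 18 * 20 = 360 (roughly the number
    of days in a year)."""
    digits = []
    place = 0
    while n > 0:
        base = 18 if (long_count and place == 1) else 20
        n, d = divmod(n, base)
        digits.append(d)
        place += 1
    return digits or [0]

print(to_maya_digits(33))                    # [13, 1]: 1*20 + 13
print(to_maya_digits(429))                   # [9, 1, 1]: 1*400 + 1*20 + 9
print(to_maya_digits(360, long_count=True))  # [0, 0, 1] in the Long Count
```

The last line shows the modified system at work: 360 occupies the third place on its own, exactly as the calendar's 18×20 step implies.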
Mayan numerals
–
Numeral systems
Mayan numerals
–
Maya numerals
Mayan numerals
–
Detail showing three columns of glyphs from La Mojarra Stela 1. The left column uses Maya numerals to show a Long Count date of 8.5.16.9.7, or 156 CE.
25.
Tally sticks
–
A tally stick was an ancient memory aid device used to record and document numbers, quantities, or even messages. Tally sticks first appear as animal bones carved with notches during the Upper Paleolithic. Historical reference is made by Pliny the Elder about the best wood to use for tallies, and by Marco Polo, who mentions the use of the tally in China. Tallies have been used for purposes such as messaging and scheduling. Principally, there are two different kinds of tally sticks: the single tally and the split tally. A common form of the same kind of primitive counting device is seen in various kinds of prayer beads. A number of artefacts have been conjectured to be tally sticks. The Ishango bone is a dark brown length of bone, the fibula of a baboon, with a series of tally marks carved in three columns running the length of the tool; it was found in 1960 in the Belgian Congo. The Lebombo bone is a baboon's fibula with 29 distinct notches, discovered within the Border Cave in the Lebombo Mountains of Swaziland. The so-called Wolf bone is a prehistoric artefact discovered in 1937 in Czechoslovakia during excavations at Vestonice, Moravia. Dated to the Aurignacian, approximately 30,000 years ago, the bone is marked with 55 marks which some believe to be tally marks. The head of an ivory Venus figurine was excavated close to the bone. The single tally stick was an elongated piece of bone, ivory, wood, or stone which is marked with a system of notches. The single tally stick serves predominantly mnemonic purposes. Related to the single tally concept are messenger sticks and the knotted cords, khipus or quipus, as used by the Inca. Herodotus reported the use of a knotted cord by Darius I of Persia. The split tally was a technique which became common in medieval Europe, which was constantly short of money and predominantly illiterate, in order to record bilateral exchange and debts. 
A stick was marked with a system of notches and then split lengthwise; this way the two halves both record the same notches, and each party to the transaction received one half of the marked stick as proof. Later this technique was refined in various ways and became virtually tamper-proof. One of the refinements was to make the two halves of the stick of different lengths. The longer part was called the stock and was given to the party which had advanced money to the receiver. The shorter portion of the stick was called the foil and was given to the party which had received the funds or goods. Using this technique, each of the parties had an identifiable record of the transaction. If one party tried to change the value of his half of the tally stick by adding more notches, the mismatch with the other half would expose the alteration. The split tally was accepted as proof in medieval courts
Tally sticks
–
Medieval English split tally stick (front and reverse view). The stick is notched and inscribed to record a debt owed to the rural dean of Preston Candover, Hampshire, of a tithe of 20 d each on 32 sheep, amounting to a total sum of £2 13s. 4d.
Tally sticks
–
Single and split tallies from the Swiss Alps, 18th to early 20th century (Swiss Alpine Museum)
Tally sticks
–
Entrance gates to the UK National Archives, Kew, from Ruskin Avenue. The notched vertical elements were inspired by medieval tally sticks.
26.
Prehistoric
–
Prehistory means literally "before history", from the Latin word for "before", præ, and the Greek ιστορία ("history"). Neighbouring civilisations were the first to follow; most other civilisations reached the end of prehistory during the Iron Age. The period when a culture is written about by others, but has not developed its own writing, is known as the protohistory of the culture. By definition, there are no written records from human prehistory, and clear techniques for dating were not well-developed until the 19th century. This article is concerned with human prehistory as defined here above; there are separate articles for the overall history of the Earth. For the human race as a whole, prehistory ends when recorded history begins with the accounts of the ancient world around the 4th millennium BC. For example, in Egypt it is generally accepted that prehistory ended around 3200 BC, whereas in New Guinea the end of the prehistoric era is set much more recently. The three-age system is the periodization of prehistory into three consecutive time periods, named for their respective predominant tool-making technologies: the Stone Age, the Bronze Age, and the Iron Age. The notion of prehistory began to surface during the Enlightenment in the work of antiquarians who used the word "primitive" to describe societies that existed before written records. The first use of the word "prehistory" in English, however, occurred in the Foreign Quarterly Review in 1836. The main source for prehistory is archaeology, but some scholars are beginning to make more use of evidence from the natural and social sciences. This view has been articulated by advocates of deep history. Human population geneticists and historical linguists are also providing valuable insight for these questions. Human prehistory differs from history not only in terms of its chronology; restricted to material processes, remains and artifacts rather than written records, prehistory is anonymous. 
Because of this, reference terms that prehistorians use, such as "Neanderthal" or "Iron Age", are modern labels with definitions sometimes subject to debate. Palaeolithic means "Old Stone Age", and begins with the first use of stone tools; the Paleolithic is the earliest period of the Stone Age. The early part of the Palaeolithic is called the Lower Palaeolithic. Evidence of control of fire by early humans during the Lower Palaeolithic era is uncertain and has at best limited scholarly support. The most widely accepted claim is that H. erectus or H. ergaster made fires between 790,000 and 690,000 BP at a site at Bnot Ya'akov Bridge, Israel. The use of fire enabled early humans to cook food and provide warmth. Early Homo sapiens originated some 200,000 years ago, ushering in the Middle Palaeolithic
Prehistoric
–
Massive stone pillars at Göbekli Tepe, in southeast Turkey, erected for ritual use by early Neolithic people 11,000 years ago.
Prehistoric
–
A prehistoric man and boy.
Prehistoric
–
Dugout canoe
Prehistoric
–
Entrance to the Ġgantija phase temple complex of Hagar Qim, Malta, 3900 BC.
27.
Season
–
A season is a division of the year marked by changes in weather, ecology, and hours of daylight. Seasons result from the yearly orbit of the Earth around the Sun. During May, June, and July, the northern hemisphere is exposed to more direct sunlight because the hemisphere faces the Sun. The same is true of the southern hemisphere in November, December, and January. It is the tilt of the Earth that causes the Sun to be higher in the sky during the summer months, which increases the solar flux. However, due to seasonal lag, June, July, and August are the hottest months in the northern hemisphere, while December, January, and February are the hottest in the southern hemisphere. In temperate and subpolar regions, four calendar-based seasons are recognized: spring, summer, autumn (or fall), and winter. Ecologists often use a six-season model for temperate climate regions: prevernal, vernal, estival, serotinal, autumnal, and hibernal. Many tropical regions have two seasons, the rainy (wet, or monsoon) season and the dry season; some have a third cool, mild, or harmattan season. Seasons often held special significance for agrarian societies, whose lives revolved around planting and harvest times. In some parts of the world, other "seasons" capture the timing of important ecological events such as hurricane season, tornado season, and wildfire season. The most historically important of these are the three seasons (flood, growth, and low water) which were previously defined by the annual flooding of the Nile in Egypt. The seasons result from the Earth's axis of rotation being tilted with respect to its orbital plane by an angle of approximately 23.5 degrees. Regardless of the time of year, the northern and southern hemispheres always experience opposite seasons. This is because during summer or winter, one part of the planet is more directly exposed to the rays of the Sun than the other. For approximately half of the year, the northern hemisphere tips toward the Sun; for the other half of the year, the same happens, but in the southern hemisphere instead of the northern, with the maximum around December 21. 
The two instants when the Sun is directly overhead at the Equator are the equinoxes. Also at those moments, both the North Pole and the South Pole of the Earth are just on the terminator, and hence day and night are equally divided between the northern and southern hemispheres. Around the March equinox, the northern hemisphere will be experiencing spring as the hours of daylight increase. The effect of axial tilt is observable as the change in day length and the altitude of the Sun at noon during the year. Between this effect and the daylight hours, the axial tilt of the Earth accounts for most of the seasonal variation in climate in both hemispheres
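The link between axial tilt and day length can be quantified with a standard approximation. The Python sketch below is illustrative only (the simple cosine model of solar declination and the function name are my own assumptions, and refraction and orbital eccentricity are ignored): it estimates hours of daylight from latitude and day of year using the sunset hour angle formula cos(ω) = -tan(latitude)·tan(declination).

```python
import math

TILT = 23.44  # Earth's axial tilt in degrees (the article's ~23.5)

def day_length_hours(latitude_deg, day_of_year):
    """Approximate hours of daylight. Declination follows a simple
    cosine model peaking at the solstices; the hour angle of sunset
    is converted to hours at 15 degrees per hour."""
    decl = -TILT * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    x = -math.tan(math.radians(latitude_deg)) * math.tan(math.radians(decl))
    x = max(-1.0, min(1.0, x))  # clamp: polar day (x < -1) or night (x > 1)
    return 2.0 * math.degrees(math.acos(x)) / 15.0

print(round(day_length_hours(50, 172), 1))  # near the June solstice: long day
print(round(day_length_hours(50, 355), 1))  # near the December solstice: short day
print(round(day_length_hours(0, 100), 1))   # at the Equator: always ~12 h
```

At 50° N the model gives roughly 16 hours of daylight in late June and roughly 8 in late December, while the Equator stays near 12 hours year-round, which is the seasonal asymmetry the paragraph above describes.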
Season
–
Red and green trees in spring
Season
–
A tree in winter
Season
–
The six ecological seasons
Season
–
The four calendar seasons, depicted in an ancient Roman mosaic from Tunisia.
28.
Babylonia
–
Babylonia was an ancient Akkadian-speaking state and cultural area based in central-southern Mesopotamia. A small Amorite-ruled state emerged in 1894 BC, which at this time contained the city of Babylon. Babylon greatly expanded during the reign of Hammurabi in the first half of the 18th century BC. During the reign of Hammurabi and afterwards, Babylonia was called Māt Akkadī, "the country of Akkad", in the Akkadian language. It was often involved in rivalry with its older fellow Akkadian-speaking state of Assyria in northern Mesopotamia. Babylonia retained the Sumerian language for religious use, but by the time Babylon was founded this was no longer a spoken language, having been wholly subsumed by Akkadian. The earliest mention of the city of Babylon can be found in a tablet from the reign of Sargon of Akkad. During the 3rd millennium BC, a cultural symbiosis occurred between Sumerian and Akkadian speakers, which included widespread bilingualism. The influence of Sumerian on Akkadian and vice versa is evident in all areas, from lexical borrowing on a massive scale to syntactic and morphological convergence. This has prompted scholars to refer to Sumerian and Akkadian in the third millennium as a sprachbund. Traditionally, the religious center of all Mesopotamia was the city of Nippur. The Akkadian Empire eventually disintegrated due to economic decline, climate change, and civil war. Sumer rose up again with the Third Dynasty of Ur in the late 22nd century BC, and its kings also seem to have gained ascendancy over most of the territory of the Akkadian kings of Assyria in northern Mesopotamia for a time. The states of the south were unable to stem the Amorite advance. King Ilu-shuma of the Old Assyrian Empire, in a known inscription, describes his exploits to the south as follows: "The freedom of the Akkadians and their children I established. 
I established their freedom from the border of the marshes and Ur and Nippur, Awal." Past scholars originally extrapolated from this text that it means he defeated the invading Amorites to the south, but there is no explicit record of that. More recently, the text has been taken to mean that Asshur supplied the south with copper from Anatolia. These policies were continued by his successors Erishum I and Ikunum. During the first centuries of what is called the Amorite period, Babylon was ruled by Sumuabum, whose reign was concerned with establishing statehood amongst a sea of other minor city-states and kingdoms in the region. However, Sumuabum appears never to have bothered to give himself the title of King of Babylon, suggesting that Babylon itself was only a minor town or city. He was followed by Sumu-la-El, Sabium, and Apil-Sin, each of whom ruled in the same manner as Sumuabum. Sin-Muballit was the first of these Amorite rulers to be regarded officially as a king of Babylon. The Elamites occupied huge swathes of southern Mesopotamia, and the early Amorite rulers were largely held in vassalage to Elam
Babylonia
–
The extent of the Babylonian Empire at the start and end of Hammurabi's reign
Babylonia
–
Old Babylonian Cylinder Seal, hematite, The king makes an animal offering to Shamash. This seal was probably made in a workshop at Sippar.
Babylonia
–
Geography
29.
Taxation
–
A tax is a financial charge or other levy imposed upon a taxpayer by a state, or the functional equivalent of a state, to fund various public expenditures. A failure to pay, or evasion of or resistance to taxation, is punishable by law. Taxes consist of direct or indirect taxes and may be paid in money or as its labour equivalent. The legal definition and the economic definition of taxes differ in that economists do not regard many transfers to governments as taxes. For example, some transfers to the public sector are comparable to prices; examples include tuition at public universities and fees for utilities provided by local governments. Governments also obtain resources by creating money and coins, through voluntary gifts, by imposing penalties, by borrowing, and by confiscating wealth. In modern taxation systems, governments levy taxes in money, but in-kind and corvée taxation are characteristic of traditional or pre-capitalist states. The method of taxation and the government expenditure of taxes raised is often highly debated in politics and economics. Tax collection is performed by a government agency such as the Canada Revenue Agency. When taxes are not fully paid, the state may impose civil penalties or criminal penalties on the non-paying entity or individual. The levying of taxes aims to raise revenue to fund governing and/or to alter prices in order to affect demand. States and their functional equivalents throughout history have used the money provided by taxation to carry out many functions. A government's ability to raise taxes is called its fiscal capacity. When expenditures exceed tax revenue, a government accumulates debt, and a portion of taxes may be used to service past debts. Governments also use taxes to fund welfare and public services. These services can include education systems, pensions for the elderly, and unemployment benefits. Energy, water and waste management systems are also common public utilities. 
A tax effectively changes the relative prices of products, and economists have therefore sought to identify the kind of tax system that would minimize this distortion. Governments use different kinds of taxes and vary the tax rates. Historically, taxes on the poor supported the nobility; modern social-security systems aim to support the poor, the disabled, or the retired by taxes on those who are still working. A state's tax system often reflects its communal values and the values of those in current political power. To create a system of taxation, a state must make choices regarding the distribution of the tax burden (who will pay taxes and how much they will pay) and how the taxes collected will be spent. In democratic nations, the public elects those in charge of establishing or administering the tax system; in countries where the public does not have a significant amount of influence over the system of taxation, that system may reflect more closely the values of those in power. All large businesses incur administrative costs in the process of delivering revenue collected from customers to the suppliers of the goods or services being purchased. Taxation is no different: the resource collected from the public through taxation is always greater than the amount which can be used by the government. The difference is called the compliance cost and includes the labour cost and other expenses incurred in complying with tax laws and rules
Taxation
–
Pieter Brueghel the Younger, The tax collector's office, 1640
Taxation
–
Taxation
Taxation
–
Egyptian peasants seized for non-payment of taxes. (Pyramid Age)
30.
Astronomy
–
Astronomy is a natural science that studies celestial objects and phenomena. It applies mathematics, physics, and chemistry in an effort to explain the origin of those objects and phenomena and their evolution. Objects of interest include planets, moons, stars, galaxies, and comets, while the phenomena include supernova explosions and gamma-ray bursts. More generally, all astronomical phenomena that originate outside Earth's atmosphere are within the purview of astronomy. A related but distinct subject, physical cosmology, is concerned with the study of the Universe as a whole. Astronomy is the oldest of the natural sciences. The early civilizations in recorded history, such as the Babylonians, Greeks, Indians, Egyptians, Nubians, Iranians, and Chinese, performed methodical observations of the night sky. During the 20th century, the field of professional astronomy split into observational and theoretical branches. Observational astronomy is focused on acquiring data from observations of astronomical objects; theoretical astronomy is oriented toward the development of computer or analytical models to describe astronomical objects and phenomena. The two fields complement each other, with theoretical astronomy seeking to explain the observational results and observations being used to confirm theoretical results. Astronomy is one of the few sciences where amateurs can play an active role, especially in the discovery and observation of transient phenomena. Amateur astronomers have made and contributed to many important astronomical discoveries. Astronomy means "law of the stars". Astronomy should not be confused with astrology, the belief system which claims that human affairs are correlated with the positions of celestial objects. Although the two fields share a common origin, they are now entirely distinct. Generally, either the term "astronomy" or "astrophysics" may be used to refer to this subject; however, since most modern astronomical research deals with subjects related to physics, modern astronomy could actually be called astrophysics. 
A few fields, such as astrometry, are purely astronomy rather than also astrophysics. Some titles of the leading scientific journals in this field include The Astronomical Journal, The Astrophysical Journal, and Astronomy and Astrophysics. In early times, astronomy comprised only the observation and prediction of the motions of objects visible to the naked eye. In some locations, early cultures assembled massive artifacts that possibly had some astronomical purpose. Before tools such as the telescope were invented, early study of the stars was conducted using the naked eye, and most of early astronomy consisted of mapping the positions of the stars and planets, a science now referred to as astrometry. From these observations, early ideas about the motions of the planets were formed, and the nature of the Sun, Moon, and Earth was explored philosophically. The Earth was believed to be the center of the Universe, with the Sun, the Moon, and the stars rotating around it; this is known as the geocentric model of the Universe, or the Ptolemaic system. The Babylonians discovered that lunar eclipses recurred in a repeating cycle known as a saros.
Astronomy
–
A star-forming region in the Large Magellanic Cloud, an irregular galaxy.
Astronomy
–
A giant Hubble mosaic of the Crab Nebula, a supernova remnant
Astronomy
–
19th century Sydney Observatory, Australia (1873)
Astronomy
–
The 19th-century Quito Astronomical Observatory is located 12 minutes south of the Equator, in Quito, Ecuador.
31.
Elementary arithmetic
–
Elementary arithmetic is the simplified portion of arithmetic that includes the operations of addition, subtraction, multiplication, and division. It should not be confused with elementary function arithmetic. Elementary arithmetic starts with the natural numbers and the written symbols that represent them; it also includes fractions and negative numbers, which can be represented on a number line. Digits are the entire set of symbols used to represent numbers. In a particular numeral system, a single digit represents a different amount than any other digit. In modern usage, the Arabic numerals are the most common set of symbols, and each single digit matches one of the following amounts:

0, zero: used in the absence of objects to be counted. For example, a different way of saying "there are no sticks here" is to say "the number of sticks here is 0".
1, one: here is one stick: I
2, two: applied to a pair of items; here are two sticks: I I
3, three: here are three sticks: I I I
4, four: here are four sticks: I I I I
5, five: here are five sticks: I I I I I
6, six: here are six sticks: I I I I I I
7, seven: here are seven sticks: I I I I I I I
8, eight: here are eight sticks: I I I I I I I I
9, nine: here are nine sticks: I I I I I I I I I

Any numeral system defines the value of all numbers that contain more than one digit. The Hindu–Arabic numeral system uses positional notation to determine the value of any numeral: in this type of system, each additional digit involves one or more multiplications by the radix value, with the result added to the value of the adjacent digit. With Arabic numerals, for example, multiplying the digit 2 by the radix value of ten and adding the digit 1 produces the value twenty-one for the numeral 21; an additional multiplication by the radix value occurs for each additional digit. When two numbers are added together, the result is called a sum.
The two numbers being added together are called addends. Suppose you have two bags, one bag holding five apples and a second bag holding three apples. Grabbing a third, empty bag, move all the apples from the first and second bags into the third; the third bag now holds eight apples. This illustrates that the combination of three apples and five apples is eight apples, or more generally, "three plus five is eight", "three plus five equals eight", or "eight is the sum of three and five".
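The positional rule described above (multiply by the radix for each additional digit, then add the next digit's value) can be sketched in a few lines of Python; the function name numeral_value is ours, chosen for illustration:

```python
# A minimal sketch of positional notation: each additional digit
# multiplies the accumulated value by the radix (10 for Arabic
# numerals) and adds the next digit's value.

def numeral_value(digits: str, radix: int = 10) -> int:
    value = 0
    for d in digits:
        value = value * radix + int(d)  # multiply by radix, add next digit
    return value

print(numeral_value("21"))   # 2*10 + 1 = 21
print(numeral_value("305"))  # (3*10 + 0)*10 + 5 = 305

# The bag-of-apples illustration: three apples plus five apples is eight.
print(3 + 5)  # 8
```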
Elementary arithmetic
–
The basic elementary arithmetic symbols.
32.
Subtraction
–
Subtraction is a mathematical operation that represents the removal of objects from a collection. It is signified by the minus sign. For example, in the picture on the right, there are 5 − 2 apples, meaning 5 apples with 2 taken away, for a total of 3 apples. Subtraction is anticommutative, meaning that changing the order of the operands changes the sign of the answer, and it is not associative, meaning that when one subtracts more than two numbers, the order in which the subtractions are performed matters. Subtraction of 0 does not change a number. Subtraction also obeys predictable rules concerning related operations such as addition and multiplication. All of these rules can be proven, starting with the subtraction of integers and generalizing up through the real numbers; general binary operations that continue these patterns are studied in abstract algebra. Performing subtraction is one of the simplest numerical tasks, and subtraction of very small numbers is accessible to young children. In primary education, students are taught to subtract numbers in the decimal system, starting with single digits. Subtraction is written using the minus sign − between the terms, that is, in infix notation, and the result is expressed with an equals sign. Subtraction is also sometimes understood without any symbol: in accounting, a column of two numbers with the lower number in red usually indicates that the lower number is to be subtracted. Formally, the number being subtracted is known as the subtrahend, while the number it is subtracted from is the minuend. All of this terminology derives from Latin: subtraction is an English word derived from the Latin verb subtrahere, which is in turn a compound of sub "from under" and trahere "to pull"; thus to subtract is to draw from below, or take away. Using the gerundive suffix -nd results in subtrahend, "thing to be subtracted"; likewise, from minuere "to reduce or diminish", one gets minuend, "thing to be diminished". Imagine a line segment of length b with the left end labeled a and the right end labeled c. Starting from a, it takes b steps to the right to reach c.
This movement to the right is modeled mathematically by addition: a + b = c. From c, it takes b steps to the left to get back to a; this movement to the left is modeled by subtraction: c − b = a. Now consider a line segment labeled with the numbers 1, 2, and 3. From position 3, it takes no steps to the left to stay at 3, and it takes 2 steps to the left to get to position 1, so 3 − 2 = 1. This picture is inadequate to describe what would happen after going 3 steps to the left of position 3; to represent such an operation, the line must be extended. To subtract arbitrary natural numbers, one begins with a line containing every natural number. From 3, it takes 3 steps to the left to get to 0, so 3 − 3 = 0. But 3 − 4 is still invalid, since it again leaves the line.
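The number-line picture above, where c − b means taking b steps to the left from c, can be sketched directly; subtract_on_line is a hypothetical helper name used only for illustration:

```python
# Number-line model of subtraction: c - b means "take b steps to the
# left, starting from c". On the integers, stepping left of 0 is fine.

def subtract_on_line(c: int, b: int) -> int:
    position = c
    for _ in range(b):
        position -= 1  # one step to the left
    return position

print(subtract_on_line(3, 2))  # 1
print(subtract_on_line(3, 3))  # 0
print(subtract_on_line(3, 4))  # -1, valid once the line is extended to the integers

# Anticommutativity: changing the order changes the sign of the answer.
assert 5 - 2 == -(2 - 5)
```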
Subtraction
–
Placard outside shop in Bordeaux advertising subtraction of 20% from the price of a second perfume
Subtraction
–
"5 − 2 = 3" (verbally, "five minus two equals three")
Subtraction
–
1 + … = 3
33.
Division (mathematics)
–
Division is one of the four basic operations of arithmetic, the others being addition, subtraction, and multiplication. The division of two numbers is the process of calculating the number of times one number is contained within the other. For example, in the picture on the right, the 20 apples are divided into four groups of five apples. Division can also be thought of as the process of evaluating a fraction, and fractional notation is commonly used to represent division. Division is the inverse of multiplication: if a × b = c, then a = c ÷ b, as long as b is not zero. Division by zero is undefined for the real numbers and in most other contexts, because if b = 0, then a cannot be deduced from b and c. In some contexts, division by zero can be defined, although only to a limited extent. In division, the dividend is divided by the divisor to get a quotient. In the above example, 20 is the dividend, five is the divisor, and four is the quotient. In some cases, the divisor may not be contained fully in the dividend; for example, 10 ÷ 3 leaves a remainder of one, as 10 is not a multiple of three. Sometimes this remainder is added to the quotient as a fractional part, but in the context of integer division, where numbers have no fractional part, the remainder is kept separately or discarded. Besides dividing apples, division can be applied to other physical and abstract objects. Division has been defined in several contexts, such as for the real and complex numbers and for more abstract settings such as vector spaces and fields. Division is the most mentally difficult of the four basic operations of arithmetic, and teaching the objective concept of dividing integers introduces students to the arithmetic of fractions. Unlike addition, subtraction, and multiplication, the set of all integers is not closed under division: dividing two integers may result in a remainder. To complete the division of the remainder, the number system is extended to include fractions, or rational numbers as they are more generally called.
When students advance to algebra, the theory of division intuited from arithmetic naturally extends to the algebraic division of variables and polynomials. Division is often shown in algebra and science by placing the dividend over the divisor with a horizontal line, also called a fraction bar, between them. For example, "a divided by b" is written with a above the line and b below it, and can be read out loud as "a divided by b". A fraction is a division expression where both dividend and divisor are integers, and there is no implication that the division must be evaluated further. A second way to show division is to use the obelus (division sign), common in arithmetic, in this manner: a ÷ b; however, ISO 80000-2-9.6 states that this sign should not be used. The obelus is also used alone to represent the division operation itself. In some non-English-speaking cultures, "a divided by b" is written a : b; this notation was introduced in 1631 by William Oughtred in his Clavis Mathematicae and later popularized by Gottfried Wilhelm Leibniz.
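The quotient-and-remainder behavior described above can be illustrated with Python's built-in divmod, which returns both values at once:

```python
# Integer division with remainder: 10 divided by 3 gives quotient 3
# and remainder 1, since 10 is not a multiple of 3.
quotient, remainder = divmod(10, 3)
print(quotient, remainder)  # 3 1

# The defining relationship: dividend = divisor * quotient + remainder.
assert 10 == 3 * quotient + remainder

# Division as the inverse of multiplication: if a * b = c, then a = c / b,
# as long as b is not zero.
a, b = 4, 5
c = a * b
assert a == c / b
```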
Division (mathematics)
–
This article is about the arithmetical operation. For other uses, see Division (disambiguation).
34.
Numeral system
–
A numeral system is a writing system for expressing numbers, that is, a mathematical notation for representing numbers of a given set, using digits or other symbols in a consistent manner. It can be seen as the context that allows the symbols "11" to be interpreted as the binary symbol for three, the decimal symbol for eleven, or a symbol for other numbers in different bases. The number the numeral represents is called its value. Ideally, a numeral system will represent a useful set of numbers, give every number represented a unique representation, and reflect the algebraic and arithmetic structure of the numbers. For example, the decimal representation of whole numbers gives every nonzero whole number a unique representation as a finite sequence of digits. (When decimal representation is used for the rational or real numbers, however, such numbers in general have many representations: 2.31 can also be written as 2.310, 2.3100, etc., all of which have the same meaning except in some scientific and other contexts where greater precision is implied by showing more figures.) Such variant systems are, however, not the topic of this article. The most commonly used system of numerals is the Hindu–Arabic numeral system, and two Indian mathematicians are credited with developing it: Aryabhata of Kusumapura developed the notation in the 5th century, and a century later Brahmagupta introduced the symbol for zero. The numeral system and the zero concept, developed by the Hindus in India, slowly spread to other surrounding countries due to their commercial contacts. The Arabs adopted and modified it; even today, the Arabs call the numerals which they use "Rakam Al-Hind", or the Hindu numeral system. The Arabs translated Hindu texts on numerology and spread them to the Western world due to their trade links. The Western world modified them and called them the Arabic numerals; hence the current Western numeral system is the modified version of the Hindu numeral system developed in India. It also exhibits a great similarity to the Sanskrit–Devanagari notation, which is still used in India. The simplest numeral system is the unary numeral system, in which every natural number is represented by a corresponding number of symbols. If the symbol / is chosen, for example, then the number seven would be represented by ///////.
Tally marks represent one such system still in common use. The unary system is only useful for small numbers, although it plays an important role in theoretical computer science; Elias gamma coding, which is used in data compression, expresses arbitrary-sized numbers by using unary to indicate the length of a binary numeral. The unary notation can be abbreviated by introducing different symbols for certain new values. The ancient Egyptian numeral system was of this type, and the Roman numeral system was a modification of this idea.
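As an illustration of the systems discussed here, a short Python sketch (the helper names unary and to_base are ours) renders a number in unary and in an arbitrary radix, showing how the same symbols "11" mean three in binary but eleven in decimal:

```python
# Unary: a number n is written as n copies of one symbol.
def unary(n: int, symbol: str = "/") -> str:
    return symbol * n

# Positional: repeatedly divide by the radix, collecting digits.
def to_base(n: int, radix: int) -> str:
    digits = ""
    while n > 0:
        digits = str(n % radix) + digits
        n //= radix
    return digits or "0"

print(unary(7))         # ///////
print(to_base(3, 2))    # 11 -- read as binary, "11" means three
print(to_base(11, 10))  # 11 -- the same symbols read in decimal mean eleven
```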
Numeral system
–
Numeral systems
35.
Ancient Egypt
–
Ancient Egypt was a civilization of ancient Northeastern Africa, concentrated along the lower reaches of the Nile River in what is now the modern country of Egypt. It is one of six civilizations to arise independently. Egyptian civilization followed prehistoric Egypt and coalesced around 3150 BC with the political unification of Upper and Lower Egypt under the first pharaoh, Narmer. In the aftermath of Alexander the Great's death, one of his generals, Ptolemy Soter, took control of Egypt, and this Greek Ptolemaic Kingdom ruled Egypt until 30 BC, when, under Cleopatra, it fell to the Roman Empire and became a Roman province. The success of ancient Egyptian civilization came partly from its ability to adapt to the conditions of the Nile River valley for agriculture: the predictable flooding and controlled irrigation of the fertile valley produced surplus crops, which supported a denser population and, in turn, social development and culture. Its art and architecture were widely copied, its antiquities were carried off to far corners of the world, and its monumental ruins have inspired the imaginations of travelers and writers for centuries. The Nile has been the lifeline of its region for much of human history. Nomadic modern human hunter-gatherers began living in the Nile valley through the end of the Middle Pleistocene, some 120,000 years ago. By the late Paleolithic period, the climate of Northern Africa had become increasingly hot and dry. In Predynastic and Early Dynastic times, the Egyptian climate was less arid than it is today: large regions of Egypt were covered in treed savanna and traversed by herds of grazing ungulates, foliage and fauna were far more prolific in all environs, and the Nile region supported large populations of waterfowl. Hunting would have been common for Egyptians, and this is also the period when many animals were first domesticated.
The largest of these cultures in upper Egypt was the Badari, which probably originated in the Western Desert; it was known for its high-quality ceramics and stone tools. The Badari was followed by the Amratian and Gerzeh cultures, which brought a number of technological improvements. As early as the Naqada I Period, predynastic Egyptians imported obsidian from Ethiopia, used to shape blades and other objects from flakes. In Naqada II times, early evidence exists of contact with the Near East, particularly Canaan. Establishing a power center at Hierakonpolis, and later at Abydos, Naqada III leaders expanded their control of Egypt northwards along the Nile. They also traded with Nubia to the south and with the oases of the desert to the west. Royal Nubian burials at Qustul produced artifacts bearing the oldest-known examples of Egyptian dynastic symbols, such as the white crown of Egypt. The Naqada culture also developed a ceramic glaze known as faience, which was used well into the Roman Period to decorate cups, amulets, and figurines. During the last predynastic phase, the Naqada culture began using written symbols that eventually developed into a full system of hieroglyphs for writing the ancient Egyptian language. The Early Dynastic Period was approximately contemporary with the early Sumerian-Akkadian civilisation of Mesopotamia. The third-century BC Egyptian priest Manetho grouped the long line of pharaohs from Menes to his own time into 30 dynasties, a system still used today.
Ancient Egypt
–
The Great Sphinx and the pyramids of Giza are among the most recognizable symbols of the civilization of ancient Egypt.
Ancient Egypt
–
A typical Naqada II jar decorated with gazelles. (Predynastic Period)
Ancient Egypt
–
The Narmer Palette depicts the unification of the Two Lands.
36.
Rhind Mathematical Papyrus
–
The Rhind Mathematical Papyrus is one of the best known examples of Egyptian mathematics. It is named after Alexander Henry Rhind, a Scottish antiquarian, and it dates to around 1550 BC. It is one of the two well-known mathematical papyri, along with the Moscow Mathematical Papyrus; the Rhind Papyrus is larger than the Moscow Mathematical Papyrus, while the latter is older. The Rhind Mathematical Papyrus dates to the Second Intermediate Period of Egypt and was copied by the scribe Ahmes from a now-lost text from the reign of king Amenemhat III. Written in the hieratic script, this Egyptian manuscript is 33 cm tall. The papyrus began to be transliterated and mathematically translated in the late 19th century, though the mathematical translation remains incomplete in several respects. The document is dated to Year 33 of the Hyksos king Apophis and also contains a separate, later historical note on its verso, likely dating from the period of his successor, Khamudi. In the opening paragraphs of the papyrus, Ahmes presents the papyrus as giving "Accurate reckoning for inquiring into things" and states that "the scribe Ahmose writes this copy". Several books and articles about the Rhind Mathematical Papyrus have been published; a more recent overview was published in 1987 by Robins and Shute. The first part of the Rhind papyrus consists of reference tables. The problems start out with simple fractional expressions, followed by completion problems and more involved linear equations. The first part of the papyrus is taken up by the 2/n table: the fractions 2/n for odd n ranging from 3 to 101 are expressed as sums of unit fractions. For example, 2/15 = 1/10 + 1/30. The decomposition of 2/n into unit fractions is never more than 4 terms long, as in, for example, 2/101 = 1/101 + 1/202 + 1/303 + 1/606. This table is followed by a second, much smaller table of fractional expressions for the numbers 1 through 9 divided by 10.
Problems 1–7, 7B, and 8–40 are concerned with arithmetic and elementary algebra. Problems 1–6 compute divisions of a certain number of loaves of bread among 10 men and record the outcome in unit fractions. Problems 7–20 show how to multiply the expressions 1 + 1/2 + 1/4 = 7/4 and 1 + 2/3 + 1/3 = 2 by different fractions. Problems 21–23 are problems in completion, which in modern notation are simply subtraction problems. Problems 24–34 are "aha" problems; these are linear equations: problem 32, for instance, corresponds to solving x + 1/3 x + 1/4 x = 2 for x. Problems 35–38 involve divisions of the heqat, an ancient Egyptian unit of volume. Problems 39 and 40 compute the division of loaves and use arithmetic progressions.
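The 2/n decompositions quoted above can be checked with exact rational arithmetic. As a further illustration, the greedy algorithm often attributed to Fibonacci also produces unit-fraction expansions; it is not the papyrus's method, and it generally makes different choices than the scribes did:

```python
from fractions import Fraction
from math import ceil

# Verify two decompositions from the papyrus's 2/n table.
assert Fraction(2, 15) == Fraction(1, 10) + Fraction(1, 30)
assert Fraction(2, 101) == (Fraction(1, 101) + Fraction(1, 202)
                            + Fraction(1, 303) + Fraction(1, 606))

# Greedy (Fibonacci) unit-fraction expansion -- an illustration only,
# NOT the method used by the Egyptian scribes: repeatedly subtract the
# largest unit fraction not exceeding what remains.
def greedy_unit_fractions(frac: Fraction) -> list[Fraction]:
    terms = []
    while frac > 0:
        d = ceil(1 / frac)  # smallest denominator d with 1/d <= frac
        terms.append(Fraction(1, d))
        frac -= Fraction(1, d)
    return terms

# The greedy expansion of 2/15 is 1/8 + 1/120, which differs from the
# papyrus's choice of 1/10 + 1/30.
print(greedy_unit_fractions(Fraction(2, 15)))
```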
Rhind Mathematical Papyrus
–
A portion of the Rhind Papyrus
37.
Muhammad ibn Musa al-Khwarizmi
–
Muḥammad ibn Mūsā al-Khwārizmī, formerly Latinized as Algoritmi, was a Persian mathematician, astronomer, and geographer during the Abbasid Caliphate and a scholar in the House of Wisdom in Baghdad. In the 12th century, Latin translations of his work on the Indian numerals introduced the decimal number system to the Western world. Al-Khwārizmī's The Compendious Book on Calculation by Completion and Balancing presented the first systematic solution of linear and quadratic equations, and he is often considered one of the fathers of algebra. He revised Ptolemy's Geography and wrote on astronomy and astrology. Some words reflect the importance of al-Khwārizmī's contributions to mathematics: "algebra" is derived from al-jabr, one of the two operations he used to solve quadratic equations, while "algorism" and "algorithm" stem from Algoritmi, the Latin form of his name. His name is also the origin of the Spanish guarismo and of the Portuguese algarismo. Few details of al-Khwārizmī's life are known with certainty. He was born into a Persian family, and Ibn al-Nadim gives his birthplace as Khwarezm in Greater Khorasan. Muhammad ibn Jarir al-Tabari gives his name as Muḥammad ibn Musá al-Khwārizmiyy al-Majūsiyy al-Quṭrubbaliyy. The epithet al-Qutrubbulli could indicate he might instead have come from Qutrubbul. In the words of one historian, "this would not be worth mentioning if a series of errors concerning the personality of al-Khwārizmī, occasionally even the origins of his knowledge, had not been made. Recently, G. J. Toomer... with naive confidence constructed an entire fantasy on the error which cannot be denied the merit of amusing the reader." Regarding al-Khwārizmī's religion, Toomer writes that another epithet given to him by al-Ṭabarī, al-Majūsī, would seem to indicate that he was an adherent of the old Zoroastrian religion. Ibn al-Nadīm's Kitāb al-Fihrist includes a short biography of al-Khwārizmī together with a list of the books he wrote.
Al-Khwārizmī accomplished most of his work in the period between 813 and 833. Douglas Morton Dunlop suggests that Muḥammad ibn Mūsā al-Khwārizmī may in fact have been the same person as Muḥammad ibn Mūsā ibn Shākir, the eldest of the three Banū Mūsā. Al-Khwārizmī's contributions to mathematics, geography, astronomy, and cartography established the basis for innovation in algebra. His work On the Calculation with Hindu Numerals, written about 825, was principally responsible for spreading the Hindu–Arabic numeral system throughout the Middle East and Europe. It was translated into Latin as Algoritmi de numero Indorum; the rendering of al-Khwārizmī as Algoritmi led to the term "algorithm". Some of his work was based on Persian and Babylonian astronomy and Indian numbers. Al-Khwārizmī systematized and corrected Ptolemy's data for Africa and the Middle East. Another major book was his Kitab surat al-ard, presenting the coordinates of places based on those in the Geography of Ptolemy but with improved values for the Mediterranean Sea and Asia. He also wrote on mechanical devices like the astrolabe and sundial, and he assisted a project to determine the circumference of the Earth and to make a map for al-Mamun. When, in the 12th century, his works spread to Europe through Latin translations, they had a profound impact on the advancement of mathematics there. The Compendious Book on Calculation by Completion and Balancing is a mathematical book written approximately 830 CE; the term "algebra" is derived from the name of one of the operations with equations described in this book.
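Completion ("al-jabr") can be made concrete with the classic example from the Compendious Book, x² + 10x = 39, which is solved by completing the square: adding (10/2)² = 25 to both sides gives (x + 5)² = 64, so x + 5 = 8 and x = 3. A minimal Python sketch of that procedure (the function name is ours, for illustration):

```python
import math

# Completing the square for an equation of the form x^2 + p*x = q,
# the "squares and roots equal numbers" case treated in the
# Compendious Book. Returns the positive root.
def complete_the_square(p: float, q: float) -> float:
    half = p / 2
    # x^2 + p*x + half^2 = q + half^2  ==>  (x + half)^2 = q + half^2
    return math.sqrt(q + half * half) - half

# Al-Khwarizmi's worked example: x^2 + 10x = 39 has positive root 3.
print(complete_the_square(10, 39))  # 3.0
```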
Muhammad ibn Musa al-Khwarizmi
–
A page from al-Khwārizmī's Algebra
Muhammad ibn Musa al-Khwarizmi
–
A stamp issued September 6, 1983 in the Soviet Union, commemorating al-Khwārizmī's (approximate) 1200th birthday.
Muhammad ibn Musa al-Khwarizmi
–
A 15th-century version of Ptolemy's Geography for comparison.
38.
Latin language
–
Latin is a classical language belonging to the Italic branch of the Indo-European languages. The Latin alphabet is derived from the Etruscan and Greek alphabets. Latin was originally spoken in Latium, in the Italian Peninsula; through the power of the Roman Republic, it became the dominant language of the region. Vulgar Latin developed into the Romance languages, such as Italian, Portuguese, Spanish, French, and Romanian, and Latin, Italian, and French have contributed many words to the English language. Latin and Ancient Greek roots are used in theology, biology, and medicine. By the late Roman Republic, Old Latin had been standardised into Classical Latin. Vulgar Latin was the colloquial form spoken during the same time, attested in inscriptions and in the works of comic playwrights like Plautus and Terence. Late Latin is the written language from the 3rd century; later, Early Modern Latin and Modern Latin evolved. Latin was used as the language of international communication, scholarship, and science until well into the 18th century, when it began to be supplanted by vernaculars. Ecclesiastical Latin remains the language of the Holy See and the Roman Rite of the Catholic Church. Today, many students, scholars, and members of the Catholic clergy speak Latin fluently, and it is taught in primary, secondary, and postsecondary educational institutions around the world. The language has been passed down through various forms. Some inscriptions have been published in an internationally agreed, monumental, multivolume series, the Corpus Inscriptionum Latinarum; authors and publishers vary, but the format is about the same: volumes detailing inscriptions with a critical apparatus stating the provenance. The reading and interpretation of these inscriptions is the subject matter of the field of epigraphy. The works of several hundred ancient authors who wrote in Latin have survived in whole or in part; they are in part the subject matter of the field of classics.
Some modern works have even been translated into Latin, such as The Cat in the Hat and a book of fairy tales; additional resources include phrasebooks and resources for rendering everyday phrases and concepts into Latin, such as Meissner's Latin Phrasebook. The Latin influence in English has been significant at all stages of its insular development. From the 16th to the 18th centuries, English writers cobbled together huge numbers of new words from Latin and Greek words, dubbed "inkhorn terms", as if they had spilled from a pot of ink. Many of these words were used once by the author and then forgotten, but many of the most common polysyllabic English words are of Latin origin through the medium of Old French. Romance words make up respectively 59%, 20%, and 14% of the English, German, and Dutch vocabularies, and those figures can rise dramatically when only non-compound and non-derived words are included. Accordingly, Romance words make up roughly 35% of the vocabulary of Dutch. Roman engineering had the same effect on scientific terminology as a whole.
Latin language
–
Latin inscription, in the Colosseum
Latin language
–
Julius Caesar's Commentarii de Bello Gallico is one of the most famous classical Latin texts of the Golden Age of Latin. The unvarnished, journalistic style of this patrician general has long been taught as a model of the urbane Latin officially spoken and written in the floruit of the Roman republic.
Latin language
–
A multi-volume Latin dictionary in the University Library of Graz
Latin language
–
Latin and Ancient Greek Language - Culture - Linguistics at Duke University in 2014.
39.
Saint Augustine
–
Augustine of Hippo was an early Christian theologian and philosopher whose writings influenced the development of Western Christianity and Western philosophy. He was the bishop of Hippo Regius, located in Numidia. Augustine is viewed as one of the most important Church Fathers in Western Christianity for his writings in the Patristic Era; among his most important works are The City of God and Confessions. According to his contemporary Jerome, Augustine "established anew the ancient Faith". In his early years, he was influenced by Manichaeism. After his baptism and conversion to Christianity in 386, Augustine developed his own approach to philosophy and theology, accommodating a variety of methods and perspectives. Believing that the grace of Christ was indispensable to human freedom, he helped formulate the doctrine of original sin. When the Western Roman Empire began to disintegrate, Augustine developed the concept of the Church as a spiritual City of God, distinct from the material Earthly City, and his thoughts profoundly influenced the medieval worldview. The segment of the Church that adhered to the concept of the Trinity as defined by the Council of Nicaea and the Council of Constantinople closely identified with Augustine's On the Trinity. Augustine is recognized as a saint in the Catholic Church and the Eastern Christian Church, and he is also the patron of the Augustinians. His memorial is celebrated on 28 August, the day of his death. Augustine is the patron saint of brewers, printers, theologians, the alleviation of sore eyes, and a number of cities and dioceses. Many Protestants, especially Calvinists and Lutherans, consider him to be one of the fathers of the Protestant Reformation due to his teachings on salvation. Lutherans, and Martin Luther in particular, have held Augustine in preeminence; Luther himself was a member of the Order of the Augustinian Eremites.
In the East, some of his teachings are disputed and have in the 20th century in particular come under attack by such theologians as John Romanides, but other theologians and figures of the Eastern Orthodox Church, chiefly Georges Florovsky, have shown significant appropriation of his writings. The most controversial doctrine surrounding his name is the filioque, which has been rejected by the Orthodox Church; other disputed teachings include his views on original sin, the doctrine of grace, and predestination. Nevertheless, though considered to be mistaken on some points, he is still considered a saint, and in the Orthodox Church his feast day is likewise celebrated on 28 August. Augustine was born in the year 354 AD in the municipium of Thagaste in Roman Africa. His mother, Monica (or Monnica), was a devout Christian, and in his writings Augustine leaves some information as to the consciousness of his African heritage.
Saint Augustine
–
Saint Augustine from a 19th-century engraving
Saint Augustine
–
Saint Augustine Taken to School by Saint Monica, by Niccolò di Pietro, 1413–15
Saint Augustine
–
The earliest known portrait of Saint Augustine in a 6th-century fresco, Lateran, Rome.
Saint Augustine
–
Fra Angelico, The Conversion of St. Augustine (painting).
40.
Physics
–
Physics is the natural science that involves the study of matter and its motion and behavior through space and time, along with related concepts such as energy and force. One of the most fundamental scientific disciplines, physics has as its main goal to understand how the universe behaves. Physics is one of the oldest academic disciplines, perhaps the oldest through its inclusion of astronomy. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the mechanisms of other sciences while opening new avenues of research in areas such as mathematics. Physics also makes significant contributions through advances in new technologies that arise from theoretical breakthroughs; the United Nations named 2005 the World Year of Physics. Astronomy is the oldest of the natural sciences: the stars and planets were often a target of worship, believed to represent the gods, and while the explanations for these phenomena were often unscientific and lacking in evidence, these early observations laid the foundations for later astronomy. According to Asger Aaboe, the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy. In the medieval Islamic world, the most notable innovations were in the field of optics and vision, which came from the works of many scientists like Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi, and Avicenna. The most notable work was The Book of Optics, written by Ibn al-Haytham, in which he not only was the first to disprove the ancient Greek idea about vision but also came up with a new theory. In the book, he was also the first to study the phenomenon of the pinhole camera. Many later European scholars and fellow polymaths, from Robert Grosseteste and Leonardo da Vinci to René Descartes, Johannes Kepler, and Isaac Newton, were in his debt.
Indeed, the influence of Ibn al-Haytham's Optics ranks alongside that of Newton's work of the same title. The translation of The Book of Optics had a huge impact on Europe: from it, later European scholars were able to build the same devices that Ibn al-Haytham had built, and from this, such important things as eyeglasses, magnifying glasses, and telescopes were developed. Physics became a separate science when early modern Europeans used experimental and quantitative methods to discover what are now considered to be the laws of physics. Newton also developed calculus, the mathematical study of change, which provided new mathematical methods for solving physical problems. The discovery of new laws in thermodynamics, chemistry, and electromagnetics resulted from greater research efforts during the Industrial Revolution as energy needs increased. However, inaccuracies in classical mechanics for very small objects and very high velocities led to the development of modern physics in the 20th century. Modern physics began in the early 20th century with the work of Max Planck in quantum theory and Albert Einstein's theory of relativity; both of these theories came about due to inaccuracies in classical mechanics in certain situations. Quantum mechanics would come to be pioneered by Werner Heisenberg, Erwin Schrödinger, and Paul Dirac, and from this early work, and work in related fields, the Standard Model of particle physics was derived. Areas of mathematics in general are important to this field, such as the study of probabilities. In many ways, physics stems from ancient Greek philosophy.
Further information: Outline of physics
Ancient Egyptian astronomy is evident in monuments like the ceiling of Senemut's tomb from the Eighteenth Dynasty of Egypt.
Sir Isaac Newton (1643–1727), whose laws of motion and universal gravitation were major milestones in classical physics
Albert Einstein (1879–1955), whose work on the photoelectric effect and the theory of relativity led to a revolution in 20th century physics
41.
Italians
–
Italians are a nation and ethnic group native to Italy who share a common culture and ancestry and speak the Italian language as a native tongue. The majority of Italian nationals are speakers of Standard Italian. Italians have greatly influenced and contributed to the arts and music, science, technology, cuisine, sports, fashion, jurisprudence and banking; Italian people are generally known for their localism and their attention to clothing and family values. The term Italian is at least 3,000 years old and has a history that goes back to pre-Roman Italy. According to one of the common explanations, the term Italia, from Latin Italia, was borrowed through Greek from the Oscan Víteliú. The bull was a symbol of the southern Italic tribes and was often depicted goring the Roman wolf as a defiant symbol of free Italy during the Social War. The Greek historian Dionysius of Halicarnassus states this account together with the legend that Italy was named after Italus, mentioned also by Aristotle and Thucydides. The Etruscan civilization reached its peak about the 7th century BC, but by 509 BC, when the Romans overthrew their Etruscan monarchs, its control in Italy was on the wane. By 350 BC, after a series of wars between Greeks and Etruscans, the Latins, with Rome as their capital, gained the ascendancy by 272 BC, and they managed to unite the entire Italian peninsula. This period of unification was followed by one of conquest in the Mediterranean; in the course of the century-long struggle against Carthage, the Romans conquered Sicily, Sardinia and Corsica. Finally, in 146 BC, at the conclusion of the Third Punic War, Carthage was completely destroyed and its inhabitants enslaved. Octavian, the final victor of the civil wars that ended the Republic, was accorded the title of Augustus by the Senate and thereby became the first Roman emperor. After two centuries of imperial rule, in the 3rd century AD, Rome was threatened by internal discord and menaced by Germanic and Asian invaders.
Emperor Diocletian's administrative division of the empire into two parts in 285 provided only temporary relief, and it became permanent in 395. In 313, Emperor Constantine accepted Christianity, and churches thereafter rose throughout the empire; however, he moved his capital from Rome to Constantinople. The last Western emperor, Romulus Augustulus, was deposed in 476 by a Germanic foederati general in Italy; his deposition marked the end of the western part of the Roman Empire. During most of the period from the fall of Rome until the Kingdom of Italy was established in 1861, the peninsula was ruled by a succession of powers. Odoacer ruled well for 13 years after gaining control of Italy in 476. Then he was attacked and defeated by Theodoric, the king of another Germanic people, the Ostrogoths. Theodoric and Odoacer ruled jointly until 493, when Theodoric murdered Odoacer. Theodoric continued to rule Italy with an army of Ostrogoths and a government that was mostly Italian. After the death of Theodoric in 526, the kingdom began to grow weak.
Amerigo Vespucci, the geographer and explorer from whose name the word America is derived.
Christopher Columbus, the discoverer of the New World.
Laura Bassi, the first chairwoman of a university in a scientific field of studies.
42.
Projective geometry
–
Projective geometry is a topic in mathematics. It is the study of geometric properties that are invariant with respect to projective transformations. This means that, compared to elementary geometry, projective geometry has a different setting, projective space, and a selective set of basic geometric concepts. Properties meaningful for projective geometry are respected by this new idea of transformation, which is more radical in its effects than can be expressed by a transformation matrix and translations. The first issue for geometers is what kind of geometry is adequate for a novel situation; one source for projective geometry was indeed the theory of perspective. Another difference from elementary geometry is the way in which parallel lines can be said to meet in a point at infinity; again this notion has an intuitive basis, such as railway tracks meeting at the horizon in a perspective drawing. See projective plane for the basics of geometry in two dimensions. While the ideas were available earlier, projective geometry was mainly a development of the 19th century. This included the theory of complex projective space, the coordinates used being complex numbers. Several major types of more abstract mathematics were based on projective geometry, and it was also a subject with a large number of practitioners for its own sake, as synthetic geometry. Another topic that developed from axiomatic studies of projective geometry is finite geometry. The topic of projective geometry is itself now divided into many research subtopics, two examples of which are projective algebraic geometry and projective differential geometry. Projective geometry is an elementary form of geometry, meaning that it is not based on a concept of distance. In two dimensions it begins with the study of configurations of points and lines. That there is indeed some geometric interest in this sparse setting was first established by Desargues and others in their exploration of the principles of perspective art.
In higher-dimensional spaces, hyperplanes and other linear subspaces are considered. Projective geometry can also be seen as a geometry of constructions with a straight-edge alone. Since projective geometry excludes compass constructions, there are no circles, no angles, no measurements and no parallels, and it was realised that the theorems that do apply to projective geometry are simpler statements. For example, the different conic sections are all equivalent in projective geometry. During the early 19th century the work of Jean-Victor Poncelet, Lazare Carnot and others established projective geometry as an independent field of mathematics. Its rigorous foundations were addressed by Karl von Staudt and perfected by the Italians Giuseppe Peano, Mario Pieri and Alessandro Padoa. After much work on the very large number of theorems in the subject, the basics of projective geometry became understood. The incidence structure and the cross-ratio are fundamental invariants under projective transformations. Projective geometry can be modeled by the affine plane plus a line at infinity, and then treating that line as ordinary. An algebraic model for doing projective geometry in the style of analytic geometry is given by homogeneous coordinates. In a foundational sense, projective geometry and ordered geometry are elementary since they involve a minimum of axioms, and either can be used as the foundation for affine and Euclidean geometry. Projective geometry is not ordered, and so it is a distinct foundation for geometry.
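The invariance of the cross-ratio mentioned above can be checked numerically. The following sketch is illustrative only (the map and the four points are arbitrary choices, not from the article): it applies an invertible fractional-linear map to four collinear points and verifies that their cross-ratio is unchanged.

```python
from fractions import Fraction

def cross_ratio(a, b, c, d):
    # Cross-ratio (a, b; c, d) of four distinct collinear points,
    # given here by affine parameters along the line.
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def apply_map(m, x):
    # An invertible 2x2 matrix m acts on the projective line as the
    # fractional-linear map x -> (m00*x + m01) / (m10*x + m11).
    return (m[0][0] * x + m[0][1]) / (m[1][0] * x + m[1][1])

m = [[Fraction(2), Fraction(1)], [Fraction(1), Fraction(3)]]  # det = 5, invertible
points = [Fraction(0), Fraction(1), Fraction(2), Fraction(5)]

before = cross_ratio(*points)
after = cross_ratio(*(apply_map(m, p) for p in points))
assert before == after  # the cross-ratio is a projective invariant
```

Exact rational arithmetic via Fraction avoids floating-point noise, so the equality check is exact rather than approximate.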
Growth measure and the polar vortices. Based on the work of Lawrence Edwards
Projecting a sphere to a plane.
43.
Principia Mathematica
–
The Principia Mathematica (PM) is a three-volume work on the foundations of mathematics, written by Alfred North Whitehead and Bertrand Russell and published in 1910, 1912 and 1913. In 1925–27, it appeared in a second edition with an important Introduction to the Second Edition. PM was an attempt to describe a set of axioms and inference rules in symbolic logic from which all mathematical truths could in principle be proven. As such, this project is of great importance in the history of mathematics and philosophy. One of the inspirations and motivations for PM was the earlier work of Gottlob Frege on logic, which Russell had discovered allowed for the construction of paradoxical sets. PM sought to avoid this problem by ruling out the unrestricted creation of arbitrary sets. This was achieved by replacing the notion of a general set with the notion of a hierarchy of sets of different types. Contemporary mathematics, however, avoids paradoxes such as Russell's in less unwieldy ways. PM is not to be confused with Russell's 1903 The Principles of Mathematics; PM itself states, "The present work was intended by us to be comprised in a second volume of Principles of Mathematics." PM has long been known for its typographical complexity. Famously, several hundred pages are required in PM to prove the validity of the proposition 1 + 1 = 2. The Modern Library placed it 23rd in a list of the top 100 English-language nonfiction books of the twentieth century. The Principia covered only set theory, cardinal numbers, ordinal numbers and real numbers, and it was also clear how lengthy such a development would be. A fourth volume on the foundations of geometry had been planned but was never completed. As noted in the criticism of the theory by Kurt Gödel, unlike a formalist theory, the logicistic theory of PM has no precise statement of the syntax of the formalism. A raw formalist theory would not provide the meaning of the symbols that form a primitive proposition; the symbols themselves could be absolutely arbitrary, and the theory would specify only how the symbols behave based on the grammar of the theory.
Then later, by assignment of values, a model would specify an interpretation of what the formulas are saying. Thus in the formal Kleene symbol set below, the interpretation of what the symbols commonly mean is indicated, but this is not a pure formalist theory. The following formalist theory is offered as a contrast to the theory of PM. A contemporary formal system would be constructed as follows. Symbols used: this set is the starting set, and other symbols can appear only by definition from these starting symbols. Symbol strings: the theory will build strings of these symbols by concatenation.
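The string-building step described above can be sketched in code. This is a hypothetical toy alphabet, not Kleene's actual symbol set or PM's notation; it only illustrates the idea that a formal theory manipulates symbol strings by concatenation alone, independently of any interpretation.

```python
# Illustrative alphabet for a tiny propositional formalism (hypothetical,
# not the symbol set of Principia Mathematica or Kleene).
ALPHABET = {"p", "q", "~", "(", ")", "->"}

def is_string_of_theory(tokens):
    # A "symbol string" is any finite sequence drawn from the alphabet.
    return all(t in ALPHABET for t in tokens)

def concatenate(s1, s2):
    # The only string-forming operation: juxtaposition of two strings.
    return s1 + s2

s = concatenate(["(", "p"], ["->", "q", ")"])
assert s == ["(", "p", "->", "q", ")"]
assert is_string_of_theory(s)
```

A full formal system would add formation rules (which strings count as well-formed formulas), axioms and inference rules on top of this bare string layer.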
The title page of the shortened version of the Principia Mathematica to *56
44.
Alfred North Whitehead
–
Alfred North Whitehead OM FRS was an English mathematician and philosopher. In his early career Whitehead wrote primarily on mathematics and logic; his most notable work in these fields is the three-volume Principia Mathematica, which he wrote with his former student Bertrand Russell. Beginning in the late 1910s and early 1920s, Whitehead gradually turned his attention from mathematics to philosophy of science, and he developed a comprehensive metaphysical system which radically departed from most of Western philosophy. Today Whitehead's philosophical works – particularly Process and Reality – are regarded as the foundational texts of process philosophy. For this reason, one of the most promising applications of Whitehead's thought in recent years has been in the area of ecological civilization, championed by the process theologian John B. Cobb, Jr. Alfred North Whitehead was born in Ramsgate, Kent, England, in 1861. His father, Alfred Whitehead, was a minister and schoolmaster of Chatham House Academy. Whitehead himself recalled both his father and grandfather as being very successful schoolmasters, but thought that his grandfather was the more extraordinary man. Whitehead's mother was Maria Sarah Whitehead, formerly Maria Sarah Buckmaster. Whitehead was apparently not particularly close with his mother, as he never mentioned her in any of his writings, and there is evidence that Whitehead's wife, Evelyn, had a low opinion of her. Whitehead was educated at Sherborne School, Dorset, then considered one of the best public schools in the country. His childhood was described as over-protected, but at school he excelled in sports and mathematics and was head prefect of his class. In 1880, Whitehead began attending Trinity College, Cambridge, where his academic advisor was Edward John Routh. He earned his BA from Trinity in 1884 and graduated as fourth wrangler. In 1890, Whitehead married Evelyn Wade, an Irish woman raised in France; they had a daughter, Jessie Whitehead, and two sons, Thomas North Whitehead and Eric Whitehead.
Eric Whitehead was killed in action while serving in the Royal Flying Corps during World War I, at the age of 19. In 1910, Whitehead resigned his Senior Lectureship in Mathematics at Trinity. Toward the end of his time in England, Whitehead turned his attention to philosophy. Though he had no advanced training in philosophy, his philosophical work soon became highly regarded. After publishing The Concept of Nature in 1920, he served as president of the Aristotelian Society from 1922 to 1923. In 1924, Henry Osborn Taylor invited the 63-year-old Whitehead to join the faculty at Harvard University as a professor of philosophy. During his time at Harvard, Whitehead produced his most important philosophical contributions. In 1925, he wrote Science and the Modern World, which was immediately hailed as an alternative to the Cartesian dualism that plagued popular science. A few years later, he published his seminal work Process and Reality. The Whiteheads spent the rest of their lives in the United States; Alfred North retired from Harvard in 1937 and remained in Cambridge, Massachusetts, until his death. The two-volume biography of Whitehead by Victor Lowe is the most definitive presentation of his life. However, many details of Whitehead's life remain obscure because he left no Nachlass; additionally, Whitehead was known for his almost fanatical belief in the right to privacy, and for writing very few personal letters of the kind that would help to gain insight on his life.
Alfred North Whitehead
Whewell's Court north range at Trinity College, Cambridge. Whitehead spent thirty years at Trinity, five as a student and twenty-five as a senior lecturer.
Bertrand Russell in 1907. Russell was a student of Whitehead's at Trinity College, and a longtime collaborator and friend.
The title page of the shortened version of the Principia Mathematica to *56
45.
L.E.J. Brouwer
–
L. E. J. Brouwer was a Dutch mathematician and philosopher, the founder of the mathematical philosophy of intuitionism. Early in his career, Brouwer proved a number of theorems in the emerging field of topology. The main results were his fixed point theorem, the topological invariance of degree and the topological invariance of dimension. The most popular of the three among mathematicians is the first, called the Brouwer fixed point theorem. It is a simple corollary to the second, about the topological invariance of degree, which is the most popular among algebraic topologists; the third is perhaps the hardest. In 1912, at age 31, he was elected a member of the Royal Netherlands Academy of Arts and Sciences. As a variety of constructive mathematics, intuitionism is essentially a philosophy of the foundations of mathematics. It is sometimes and rather simplistically characterized by saying that its adherents refuse to use the law of excluded middle in mathematical reasoning. Brouwer was a member of the Significs group. It formed part of the early history of semiotics – the study of symbols – around Victoria, Lady Welby in particular. The original meaning of his intuitionism probably cannot be completely disentangled from the intellectual milieu of that group. In 1905, at the age of 24, Brouwer expressed his philosophy of life in a short tract, Life, Art, and Mysticism. Arthur Schopenhauer had a formative influence on Brouwer, not least because he insisted that all concepts be fundamentally based on sense intuitions. All of this was interwoven with a kind of pessimism and a mystical attitude to life which is not mathematics; it was then that Brouwer felt free to return to his revolutionary project, which he was now calling intuitionism. He was combative as a young man, and he was involved in a very public and eventually demeaning controversy in the later 1920s with Hilbert over editorial policy at Mathematische Annalen, at that time a leading learned journal.
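In one dimension the fixed point theorem can be illustrated numerically: a continuous map f of [0, 1] into itself must satisfy f(x) = x somewhere, since g(x) = f(x) - x is non-negative at 0 and non-positive at 1. The following sketch (the choice of cos as an example map is ours, not from the article) locates such a point by bisection.

```python
import math

def fixed_point(f, lo=0.0, hi=1.0, tol=1e-12):
    # For continuous f : [lo, hi] -> [lo, hi], g(x) = f(x) - x changes sign
    # (g(lo) >= 0, g(hi) <= 0), so bisection converges to a fixed point.
    g = lambda x: f(x) - x
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if hi - lo < tol:
            break
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

x = fixed_point(math.cos)  # cos maps [0, 1] into itself
assert abs(math.cos(x) - x) < 1e-9
```

Note that bisection only works in one dimension; Brouwer's theorem in higher dimensions guarantees existence without providing such a simple search procedure, which is part of why it sat uneasily with his own constructivism.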
He became relatively isolated; the development of intuitionism at its source was taken up by his student Arend Heyting. Brouwer was killed in 1966 at the age of 85, struck by a vehicle while crossing the street in front of his house.
L. E. J. Brouwer
46.
German language
–
German is a West Germanic language that is mainly spoken in Central Europe. It is the most widely spoken and official language in Germany, Austria, Switzerland, South Tyrol and the German-speaking Community of Belgium; it is also one of the three official languages of Luxembourg. The languages most similar to German include other members of the West Germanic language branch, such as Afrikaans, Dutch, English and Luxembourgish. It is the second most widely spoken Germanic language, after English. One of the major languages of the world, German is the first language of about 95 million people worldwide. The German-speaking countries are ranked fifth in terms of annual publication of new books. German derives most of its vocabulary from the Germanic branch of the Indo-European language family; a portion of German words are derived from Latin and Greek, and fewer are borrowed from French and English. With slightly different standardized variants, German is a pluricentric language. Like English, German is also notable for its broad spectrum of dialects, with many unique varieties existing in Europe and other parts of the world. The history of the German language begins with the High German consonant shift during the Migration Period. When Martin Luther translated the Bible, he based his translation primarily on the standard bureaucratic language used in Saxony, also known as Meißner Deutsch. Copies of Luther's Bible featured a long list of glosses for each region that translated words which were unknown in the region into the regional dialect. Roman Catholics initially rejected Luther's translation and tried to create their own Catholic standard of the German language – the difference in relation to Protestant German was minimal. It was not until the middle of the 18th century that a widely accepted standard was created. Until about 1800, standard German was mainly a written language; in urban northern Germany, the local Low German dialects were spoken.
Standard German, which was markedly different, was often learned as a foreign language with uncertain pronunciation. Northern German pronunciation was nevertheless considered the standard in prescriptive pronunciation guides. German was the language of commerce and government in the Habsburg Empire, which encompassed a large area of Central and Eastern Europe. Until the mid-19th century, it was essentially the language of townspeople throughout most of the Empire, and its use indicated that the speaker was a merchant or someone from an urban area, regardless of nationality. Some cities, such as Prague and Budapest, were gradually Germanized in the years after their incorporation into the Habsburg domain; others, such as Pozsony, were originally settled during the Habsburg period and were primarily German at that time. Prague, Budapest and Bratislava, as well as cities like Zagreb, long retained significant German-speaking communities. The most comprehensive guide to the vocabulary of the German language is found within the Deutsches Wörterbuch. This dictionary was created by the Brothers Grimm and is composed of 16 parts, the first of which were issued beginning in 1852. In 1872, grammatical and orthographic rules first appeared in the Duden Handbook. In 1901, the 2nd Orthographical Conference ended with a standardization of the German language in its written form.
The widespread popularity of the Bible translated into German by Martin Luther helped establish modern German
Examples of German language in Namibian everyday life
German-language newspapers in the U.S. in 1922
47.
Organon
–
The Organon is the standard collection of Aristotle's six works on logic. The name Organon was given by Aristotle's followers, the Peripatetics. The order of the works is not chronological but was deliberately chosen by Theophrastus to constitute a well-structured system; indeed, parts of them seem to be a scheme of a lecture on logic. The arrangement of the works was made by Andronicus of Rhodes around 40 BC. The Categories introduces Aristotle's 10-fold classification of that which exists: substance, quantity, quality, relation, place, time, situation, condition, action and passion. On Interpretation introduces Aristotle's conception of proposition and judgment, and the relations between affirmative, negative, universal and particular propositions. Aristotle discusses the square of opposition, or square of Apuleius, in Chapter 7; Chapter 9 deals with the problem of future contingents. The Prior Analytics introduces his syllogistic method and argues for its correctness. The Posterior Analytics deals with demonstration, definition and scientific knowledge. The Topics treats issues in constructing valid arguments, and inference that is probable; it is in this treatise that Aristotle mentions the Predicables, later discussed by Porphyry and the scholastic logicians. The Sophistical Refutations gives a treatment of logical fallacies and provides a key link to Aristotle's work on rhetoric. The Organon was used in the school founded by Aristotle at the Lyceum, and some parts of the works seem to be a scheme of a lecture on logic – so much so that after Aristotle's death, his publishers collected these works. Following the collapse of the Western Roman Empire in the fifth century, much of Aristotle's work was lost in the Latin West. The Categories and On Interpretation are the only significant logical works that were available in the early Middle Ages; these had been translated into Latin by Boethius.
The other logical works were not available in Western Christendom until they were translated into Latin in the 12th century; however, the original Greek texts had been preserved in the Greek-speaking lands of the Eastern Roman Empire. In the mid-twelfth century, James of Venice translated into Latin the Posterior Analytics from Greek manuscripts found in Constantinople. The books of Aristotle were also available in the early Arab Empire. All the major scholastic philosophers wrote commentaries on the Organon: Aquinas, Ockham and Scotus wrote commentaries on On Interpretation; Ockham and Scotus wrote commentaries on the Categories and Sophistical Refutations; and Grosseteste wrote a commentary on the Posterior Analytics. During this period, while the logic certainly was based on that of Aristotle, there was a tendency to regard the logical systems of the day as complete, which in turn no doubt stifled innovation in the area. However, Francis Bacon published his Novum Organum as an attack on this tradition in 1620.
48.
Chemistry
–
Chemistry is a branch of physical science that studies the composition, structure, properties and change of matter. Chemistry is sometimes called the central science because it bridges other natural sciences, including physics; for the differences between chemistry and physics, see the comparison of chemistry and physics. The history of chemistry can be traced to alchemy, which had been practiced for several millennia in various parts of the world. The word chemistry comes from alchemy, which referred to a set of practices that encompassed elements of chemistry, metallurgy, philosophy, astrology, astronomy and mysticism. An alchemist was called a chemist in popular speech, and later the suffix -ry was added to this to describe the art of the chemist as chemistry. The modern word alchemy in turn is derived from the Arabic word al-kīmīā. In origin, the term is borrowed from the Greek χημία or χημεία. This may have Egyptian origins, since al-kīmīā is derived from the Greek χημία, which is in turn derived from the word Chemi or Kimi, the ancient name of Egypt in Egyptian. Alternately, al-kīmīā may derive from χημεία, meaning "cast together". In retrospect, the definition of chemistry has changed over time, as new discoveries and theories have added to the functionality of the science. The noted scientist Robert Boyle used the term chymistry in 1661; in 1837, Jean-Baptiste Dumas considered the word chemistry to refer to the science concerned with the laws and effects of molecular forces. More recently, in 1998, Professor Raymond Chang broadened the definition of chemistry to mean the study of matter and the changes it undergoes. Early civilizations, such as the Egyptians, Babylonians and Indians, amassed practical knowledge concerning the arts of metallurgy, pottery and dyes, but did not develop a systematic theory. Greek atomism dates back to 440 BC, arising in the works of philosophers such as Democritus and Epicurus.
In 50 BC, the Roman philosopher Lucretius expanded upon the theory in his book De rerum natura. Unlike modern concepts of science, Greek atomism was purely philosophical in nature, with little concern for empirical observations and no concern for chemical experiments. Alchemical work, particularly the development of distillation, continued in the early Byzantine period, with the most famous practitioner being the 4th-century Greek-Egyptian Zosimos of Panopolis. Robert Boyle formulated Boyle's law, rejected the classical four elements and proposed a mechanistic alternative of atoms. Before his work, though, many important discoveries had been made, notably by the Scottish chemist Joseph Black and the Dutchman J. B. The English scientist John Dalton proposed the modern theory of atoms: that all substances are composed of indivisible atoms of matter. Humphry Davy discovered nine new elements, including the alkali metals, by extracting them from their oxides with electric current. The Briton William Prout first proposed ordering all the elements by their atomic weight, as all atoms had a weight that was an exact multiple of the atomic weight of hydrogen. The inert gases, later called the noble gases, were discovered by William Ramsay in collaboration with Lord Rayleigh at the end of the century, thereby filling in the basic structure of the periodic table. Organic chemistry was developed by Justus von Liebig and others, following Friedrich Wöhler's synthesis of urea, which proved that living organisms were, in theory, reducible to chemistry.
Solutions of substances in reagent bottles, including ammonium hydroxide and nitric acid, illuminated in different colors
Democritus' atomist philosophy was later adopted by Epicurus (341–270 BCE).
Antoine-Laurent de Lavoisier is considered the "Father of Modern Chemistry".
Laboratory, Institute of Biochemistry, University of Cologne.
49.
Falsifiability
–
Falsifiability or refutability of a statement, hypothesis or theory is the inherent possibility that it can be proven false. A statement is called falsifiable if it is possible to conceive of an observation or an argument which negates the statement in question. In this sense, falsify is synonymous with nullify, meaning to invalidate or show to be false; thus, the term falsifiability is sometimes used as a synonym for testability. Some statements, such as "It will be raining here in one million years", are falsifiable in principle, but not in practice. The concern with falsifiability gained attention by way of the philosopher of science Karl Popper's scientific epistemology, falsificationism. The classical view of the philosophy of science is that it is the goal of science to prove hypotheses like "All swans are white", or to induce them from observational data. Popper argued that this would require the inference of a general rule from a number of individual cases, which is inadmissible in deductive logic. However, if one finds one single swan that is not white, deductive logic admits the conclusion that the statement that all swans are white is false. Falsificationism thus strives for questioning, for falsification, of hypotheses instead of proving them. For a statement to be questioned using observation, it needs to be at least theoretically possible that it can come into conflict with observation. A key observation of falsificationism is thus that a criterion of demarcation is needed to distinguish those statements that can come into conflict with observation from those that cannot; Popper chose falsifiability as the name of this criterion. "My proposal is based upon an asymmetry between verifiability and falsifiability, an asymmetry which results from the logical form of universal statements. For these are never derivable from singular statements, but can be contradicted by singular statements." Popper stressed that unfalsifiable statements are important in science.
Contrary to intuition, unfalsifiable statements can be embedded in – and deductively entailed by – falsifiable theories. For example, while "all men are mortal" is unfalsifiable, it is a logical consequence of the falsifiable theory that every man dies before he reaches the age of 150 years. Similarly, the ancient metaphysical and unfalsifiable idea of the existence of atoms has led to corresponding falsifiable modern theories. Popper invented the notion of metaphysical research programs to name such unfalsifiable ideas. Criticizability, in contrast to falsifiability, and thus rationality, may be comprehensive, though this claim is controversial, even among proponents of Popper's philosophy. In work beginning in the 1930s, Popper gave falsifiability a renewed emphasis as a criterion of empirical statements in science. Popper noticed that two types of statements are of particular value to scientists. The first are statements of observations, such as "there is a white swan". Logicians call these statements singular existential statements, since they assert the existence of some particular thing. They are equivalent to a predicate calculus statement of the form: there exists an x such that x is a swan, and x is white. The second are statements that categorize all instances of something, such as "all swans are white". They are usually parsed in the form: for all x, if x is a swan, then x is white. Scientific laws are commonly supposed to be of this type.
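The two statement types can be written in predicate calculus. This rendering is a standard formalization, added here for clarity rather than quoted from Popper:

```latex
% Singular existential statement: "there is a white swan"
\exists x\,\bigl(\mathrm{Swan}(x) \land \mathrm{White}(x)\bigr)

% Universal statement: "all swans are white"
\forall x\,\bigl(\mathrm{Swan}(x) \rightarrow \mathrm{White}(x)\bigr)

% The asymmetry: a single counterexample refutes the universal statement
\exists x\,\bigl(\mathrm{Swan}(x) \land \lnot\mathrm{White}(x)\bigr)
\;\Longleftrightarrow\;
\lnot\,\forall x\,\bigl(\mathrm{Swan}(x) \rightarrow \mathrm{White}(x)\bigr)
```

The last line is the logical form of the asymmetry Popper describes: no finite set of singular statements entails the universal one, but a single singular statement can contradict it.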
Are all swans white?
50.
Hypothesis
–
A hypothesis is a proposed explanation for a phenomenon. For a hypothesis to be a scientific hypothesis, the scientific method requires that one can test it. Scientists generally base scientific hypotheses on previous observations that cannot satisfactorily be explained with the available scientific theories. Even though the words hypothesis and theory are often used synonymously, a scientific hypothesis is not the same as a scientific theory. A working hypothesis is a provisionally accepted hypothesis proposed for further research. In a proposition of the form "if P, then Q", P is the assumption in a what-if question: "Remember, the way that you prove an implication is by assuming the hypothesis." –Philip Wadler. In its ancient usage, hypothesis referred to a summary of the plot of a classical drama. The English word hypothesis comes from the ancient Greek word ὑπόθεσις (hupothesis). In Plato's Meno, Socrates dissects virtue with a method used by mathematicians, that of investigating from a hypothesis. In this sense, hypothesis refers to a clever idea or to a convenient mathematical approach that simplifies cumbersome calculations. In common usage in the 21st century, a hypothesis refers to a provisional idea whose merit requires evaluation. For proper evaluation, the framer of a hypothesis needs to define specifics in operational terms. A hypothesis requires more work by the researcher in order to either confirm or disprove it. In due course, a confirmed hypothesis may become part of a theory, or occasionally may grow to become a theory itself. Normally, scientific hypotheses have the form of a mathematical model. In entrepreneurial science, a hypothesis is used to formulate provisional ideas within a business setting. The formulated hypothesis is then evaluated, where the hypothesis is proven to be either true or false through a verifiability- or falsifiability-oriented experiment. Any useful hypothesis will enable predictions by reasoning. It might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature.
The prediction may also invoke statistics and only talk about probabilities. Other philosophers of science have rejected the criterion of falsifiability or supplemented it with other criteria, such as verifiability or coherence. The scientific method involves experimentation, to test the ability of some hypothesis to adequately answer the question under investigation. In contrast, unfettered observation is not as likely to raise unexplained issues or open questions in science as would the formulation of a crucial experiment to test the hypothesis. A thought experiment might also be used to test the hypothesis. In framing a hypothesis, the investigator must not currently know the outcome of a test, or that it remains reasonably under continuing investigation. Only in such cases does the experiment, test or study potentially increase the probability of showing the truth of a hypothesis.
Hypothesis
–
Andreas Cellarius hypothesis, demonstrating the planetary motions in eccentric and epicyclical orbits
51.
Imre Lakatos
–
Lakatos was born Imre Lipschitz to a Jewish family in Debrecen, Hungary, in 1922. He received a degree in mathematics, physics, and philosophy from the University of Debrecen in 1944. In March 1944 the Germans invaded Hungary, and soon after that event Lakatos, along with Éva Révész, his then-girlfriend and subsequent wife, formed a Marxist resistance group. In May of that year, the group was joined by Éva Izsák. Lakatos, considering that there was a risk that she would be captured and forced to betray them, decided that her duty to the group was to commit suicide. Subsequently, a member of the group took her to Debrecen. During the occupation, Lakatos avoided Nazi persecution of Jews by changing his name to Imre Molnár. His mother and grandmother died in Auschwitz. He changed his surname once again, to Lakatos, in honor of Géza Lakatos. After the war, from 1947 he worked as an official in the Hungarian ministry of education. He also continued his education, with a PhD from Debrecen University awarded in 1948, and he also studied at Moscow State University under the supervision of Sofya Yanovskaya in 1949. When he returned, however, he found himself on the losing side of internal arguments within the Hungarian communist party and was imprisoned on charges of revisionism from 1950 to 1953. More of Lakatos's activities in Hungary after World War II have recently become known. After his release, Lakatos returned to academic life, doing mathematical research and translating George Pólya's How to Solve It into Hungarian. Still nominally a communist, his political views had shifted markedly. After the Soviet Union invaded Hungary in November 1956, Lakatos fled to Vienna. He received a doctorate in philosophy in 1961 from the University of Cambridge; his thesis advisor was R. B. Braithwaite.
His doctoral thesis became the basis of the book Proofs and Refutations: The Logic of Mathematical Discovery. In 1960 he was appointed to a position in the London School of Economics, where he wrote on the philosophy of mathematics and the philosophy of science. The LSE philosophy of science department at that time included Karl Popper and Joseph Agassi, and it was Agassi who first introduced Lakatos to Popper under the rubric of his applying a fallibilist methodology of conjectures and refutations to mathematics in his Cambridge PhD thesis. With co-editor Alan Musgrave, he edited the often-cited Criticism and the Growth of Knowledge, published in 1970. The 1965 Colloquium from which it arose included well-known speakers delivering papers in response to Thomas Kuhn's The Structure of Scientific Revolutions. Lakatos remained at the London School of Economics until his death in 1974 of a heart attack, at the age of just 51. The Lakatos Award was set up by the school in his memory, and his last LSE lectures on scientific method in Lent Term 1973, along with parts of his correspondence with his friend and critic Paul Feyerabend, have been published in For and Against Method. Lakatos's philosophy of mathematics was inspired by both Hegel's and Marx's dialectic, by Karl Popper's theory of knowledge, and by the work of mathematician George Pólya. The 1976 book Proofs and Refutations is based on the first three chapters of his four-chapter 1961 doctoral thesis, Essays in the Logic of Mathematical Discovery.
Imre Lakatos
–
Imre Lakatos, c. 1960s
53.
Experiment
–
An experiment is a procedure carried out to support, refute, or validate a hypothesis. Experiments provide insight into cause and effect by demonstrating what outcome occurs when a particular factor is manipulated. Experiments vary greatly in goal and scale, but always rely on repeatable procedure and logical analysis of the results. There also exist natural experimental studies. A child may carry out basic experiments to understand gravity, while teams of scientists may take years of systematic investigation to advance their understanding of a phenomenon. Experiments and other types of hands-on activities are very important to student learning in the science classroom. Experiments can raise test scores and help a student become more engaged and interested in the material they are learning. Experiments can vary from personal and informal natural comparisons to highly controlled ones. Uses of experiments vary considerably between the natural and human sciences. Experiments typically include controls, which are designed to minimize the effects of variables other than the single independent variable. This increases the reliability of the results, often through a comparison between control measurements and the other measurements. Scientific controls are a part of the scientific method. Ideally, all variables in an experiment are controlled and none are uncontrolled. In such an experiment, if all controls work as expected, it is possible to conclude that the experiment works as intended, and that results are due to the effect of the tested variable. In the scientific method, an experiment is an empirical procedure that arbitrates between competing models or hypotheses. Researchers also use experimentation to test existing theories or new hypotheses in order to support or disprove them. An experiment usually tests a hypothesis, which is an expectation about how a particular process or phenomenon works.
However, an experiment may also aim to answer a "what-if" question, without a specific expectation about what the experiment reveals. If an experiment is carefully conducted, the results usually either support or disprove the hypothesis. According to some philosophies of science, an experiment can never "prove" a hypothesis; on the other hand, an experiment that provides a counterexample can disprove a theory or hypothesis. An experiment must also control the possible confounding factors: any factors that would mar the accuracy or repeatability of the experiment or the ability to interpret the results. Confounding is commonly eliminated through scientific controls and/or, in randomized experiments, through random assignment. In engineering and the physical sciences, experiments are a primary component of the scientific method. They are used to test theories and hypotheses about how physical processes work under particular conditions. Typically, experiments in these fields focus on replication of identical procedures in hopes of producing identical results in each replication. In medicine and the social sciences, the prevalence of experimental research varies widely across disciplines. In contrast to norms in the physical sciences, the focus is typically on the average treatment effect or another test statistic produced by the experiment.
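Random assignment and the average treatment effect can be sketched in a short simulation (all names and numbers here are illustrative, not from the text): with subjects randomly split into treatment and control groups, the difference in group means estimates an assumed constant treatment effect.

```python
import random

# Illustrative simulation: baseline outcomes drawn from a normal distribution,
# half the subjects randomly assigned to treatment, which adds a constant +2.0.
random.seed(0)

n = 10_000
baseline = [random.gauss(10.0, 1.0) for _ in range(n)]
treated = set(random.sample(range(n), n // 2))  # random assignment

TRUE_EFFECT = 2.0
outcomes = [y + TRUE_EFFECT if i in treated else y for i, y in enumerate(baseline)]

# Difference in means estimates the average treatment effect (ATE).
treat_mean = sum(outcomes[i] for i in treated) / len(treated)
control_ids = [i for i in range(n) if i not in treated]
control_mean = sum(outcomes[i] for i in control_ids) / len(control_ids)
ate_estimate = treat_mean - control_mean

print(f"estimated ATE = {ate_estimate:.2f}")  # close to the true effect of 2.0
```

Because assignment is random, baseline differences between the groups average out, which is exactly why the simple difference in means is an unbiased estimate here.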
Experiment
–
Even very young children perform rudimentary experiments to learn about the world and how things work.
Experiment
–
Original map by John Snow showing the clusters of cholera cases in the London epidemic of 1854
54.
Liberal arts
–
Grammar, logic, and rhetoric were the core liberal arts, while arithmetic, geometry, the theory of music, and astronomy also played a part in education. In modern times, liberal arts education is a term that can be interpreted in different ways. It can refer to academic subjects such as literature, philosophy, mathematics, and the social and physical sciences, or it can refer to overall studies in a liberal arts degree program. For example, Harvard University offers a Bachelor of Arts degree. For both interpretations, the term generally refers to matters not relating to the professional, vocational, or technical curriculum. The four scientific artes (music, arithmetic, geometry and astronomy) were known from the time of Boethius onwards as the Quadrivium. After the 9th century, the three arts of the humanities (grammar, logic, and rhetoric) were classed as the Trivium. It was in that form that the seven liberal arts were studied in the medieval Western university. During the Middle Ages, logic gradually came to take predominance over the other parts of the Trivium. In the Renaissance, the Italian humanists and their Northern counterparts in many respects continued the traditions of the Middle Ages. The ideal of a liberal arts, or humanistic, education grounded in classical languages and literature persisted until the middle of the twentieth century. Some subsections of the liberal arts are in the trivium (the verbal arts: grammar, logic, and rhetoric) and in the quadrivium (the numerical arts: arithmetic, geometry, music, and astronomy). Analyzing and interpreting information is also included. A liberal arts education at the secondary school level prepares the student for higher education at a university. It is thus meant for the more academically minded students. In addition to the usual curriculum, students of a liberal arts education often study Latin and Ancient Greek.
Some liberal arts education programs provide general education, while others have a specific focus. Today, a number of other areas of specialization exist, such as gymnasiums specializing in economics, technology or domestic sciences. In some countries, there is a notion of the progymnasium, which is equivalent to the beginning classes of the full gymnasium; here, the prefix pro is equivalent to pre. In the United States, liberal arts colleges are schools emphasizing undergraduate study in the liberal arts. In most parts of Europe, liberal arts education is deeply rooted. In Germany, Austria and countries influenced by their education system, the term is not to be confused with some modern educational concepts that use a similar wording. Educational institutions that see themselves in that tradition are often a Gymnasium. They aim at providing their pupils with a comprehensive education in order to form personality with regard to a pupil's own humanity as well as his or her innate intellectual skills. Going back to the tradition of the liberal arts in Europe, education in the above sense was freed from scholastic thinking. In particular, Wilhelm von Humboldt played a key role in that regard. Universities encourage students to engage in such activities and offer respective opportunities, but do not make such activities part of the university's curriculum.
Liberal arts
–
Philosophia et septem artes liberales, The seven liberal arts – Picture from the Hortus deliciarum of Herrad of Landsberg (12th century)
Liberal arts
–
Page from Marriage of Mercury and Philology
55.
University
–
A university is an institution of higher education and research which grants academic degrees in various academic disciplines. Universities typically provide undergraduate education and postgraduate education. The word university is derived from the Latin universitas magistrorum et scholarium, which roughly means "community of teachers and scholars." Universities were created in Italy and evolved from cathedral schools for the clergy during the High Middle Ages. The original Latin word universitas refers in general to a number of persons associated into one body: a society, company, community, guild, corporation, etc. Like other guilds, they were self-regulating and determined the qualifications of their members. An important idea in the definition of a university is the notion of academic freedom. The first documentary evidence of this comes from early in the life of the first university: the University of Bologna adopted an academic charter, the Constitutio Habita, in 1158 or 1155, which guaranteed the right of a traveling scholar to unhindered passage in the interests of education. Today this is claimed as the origin of academic freedom, and it is now widely recognised internationally: on 18 September 1988, 430 university rectors signed the Magna Charta Universitatum, marking the 900th anniversary of Bologna's foundation. The number of universities signing the Magna Charta Universitatum continues to grow. The university is generally regarded as a formal institution that has its origin in the medieval Christian setting. The earliest universities were developed under the aegis of the Latin Church by papal bull as studia generalia. It is possible, however, that the development of cathedral schools into universities was quite rare, with the University of Paris being an exception. Later universities were also founded by kings or municipal administrations. In the early period, most new universities were founded from pre-existing schools.
Many historians state that universities and cathedral schools were a continuation of the interest in learning promoted by monasteries. The first universities in Europe with a form of corporate/guild structure were the University of Bologna, the University of Paris, and the University of Oxford. The students "had all the power … and dominated the masters." Princes and leaders of city governments perceived the potential benefits of having a scholarly expertise develop with the ability to address difficult problems and achieve desired ends. The emergence of humanism was essential to this understanding of the possible utility of universities, as was the revival of interest in knowledge gained from ancient Greek texts. The rediscovery of Aristotle's works (more than 3000 pages of them would eventually be translated) fuelled a spirit of inquiry into natural processes that had begun to emerge in the 12th century. Some scholars believe that these works represented one of the most important document discoveries in Western intellectual history. Richard Dales, for instance, calls the discovery of Aristotle's works "a turning point in the history of Western thought." This became the primary mission of lecturers, and the expectation of students. The university culture developed differently in northern Europe than it did in the south. Latin was the language of the university, used for all texts, lectures, disputations and examinations. Professors lectured on the books of Aristotle for logic, natural philosophy, and metaphysics, while Hippocrates, Galen, and Avicenna were used for medicine. Outside of these commonalities, great differences separated north and south, primarily in subject matter.
University
–
Degree ceremony at the University of Oxford. The Pro-Vice-Chancellor in MA gown and hood, Proctor in official dress and new Doctors of Philosophy in scarlet full dress. Behind them, a bedel, a Doctor and Bachelors of Arts and Medicine graduate.
University
–
The University of Bologna is the oldest university in history, founded in 1088.
University
–
Meeting of doctors at the University of Paris. From a medieval manuscript.
University
–
Sapienza University of Rome is the largest university in Europe and one of the most prestigious European universities.
56.
Mathematical beauty
–
Mathematical beauty describes the notion that some mathematicians may derive aesthetic pleasure from their work, and from mathematics in general. They express this pleasure by describing mathematics as beautiful. Mathematicians describe mathematics as an art form or, at a minimum, as a creative activity. Comparisons are often made with music and poetry. Bertrand Russell wrote of "the true spirit of delight, the exaltation, the sense of being more than Man" to be found in mathematics. Paul Erdős expressed his views on the ineffability of mathematics when he said, "Why are numbers beautiful? It's like asking why is Beethoven's Ninth Symphony beautiful. If you don't see why, someone can't tell you. I know numbers are beautiful. If they aren't beautiful, nothing is." Mathematicians describe an especially pleasing method of proof as elegant. Depending on context, this may mean: a proof that uses a minimum of additional assumptions or previous results; a proof that is unusually succinct; a proof that derives a result in a surprising way; a proof that is based on new and original insights; a method of proof that can be easily generalized to solve a family of similar problems. In the search for an elegant proof, mathematicians often look for different independent ways to prove a result; the first proof that is found may not be the best. The theorem for which the greatest number of different proofs have been discovered is possibly the Pythagorean theorem. Another theorem that has been proved in many different ways is the theorem of quadratic reciprocity; Carl Friedrich Gauss alone published eight different proofs of this theorem. Some mathematicians see beauty in mathematical results that establish connections between two areas of mathematics that at first sight appear to be unrelated. These results are often described as deep. While it is difficult to find universal agreement on whether a result is deep, some examples are commonly cited. One is Euler's identity, e^(iπ) + 1 = 0. This is a special case of Euler's formula, which the physicist Richard Feynman called "our jewel" and "the most remarkable formula in mathematics."
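Euler's identity can be checked numerically with Python's standard cmath module; the tiny residual below is pure floating-point rounding, not a deviation from the exact identity.

```python
import cmath

# Euler's formula: e^(ix) = cos(x) + i*sin(x).
# Setting x = pi gives Euler's identity: e^(i*pi) + 1 = 0.
z = cmath.exp(1j * cmath.pi) + 1

# |z| is on the order of 1e-16, i.e. zero up to machine precision.
print(abs(z) < 1e-12)  # True
```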
Other examples of deep results include unexpected insights into mathematical structures. For example, Gauss's Theorema Egregium is a deep theorem which relates a local phenomenon (curvature) to a global phenomenon (area) in a surprising way. In particular, the area of a triangle on a curved surface is proportional to the excess of the triangle. Another example is the fundamental theorem of calculus. The opposite of deep is trivial. Sometimes, however, a statement of a theorem can be original enough to be considered deep, even though its proof is fairly obvious. In his A Mathematician's Apology, Hardy suggests that a beautiful proof or result possesses inevitability, unexpectedness, and economy.
Mathematical beauty
–
Diagram from Leon Battista Alberti 's 1435 Della Pittura, with pillars in perspective on a grid
Mathematical beauty
–
An example of "beauty in method"—a simple and elegant proof of the Pythagorean theorem.
Mathematical beauty
–
Forms
57.
Architecture
–
Architecture is both the process and the product of planning, designing, and constructing buildings and other physical structures. Architectural works, in the material form of buildings, are often perceived as cultural symbols. Historical civilizations are often identified with their surviving architectural achievements. Architecture can mean: a general term to describe buildings and other physical structures; the art and science of designing buildings and nonbuilding structures; the style of design and method of construction of buildings and other physical structures; a unifying or coherent form or structure; knowledge of art, science, and technology; the design activity of the architect, from the macro-level to the micro-level; the practice of the architect, where architecture means offering or rendering services in connection with the design and construction of buildings. The earliest surviving written work on the subject of architecture is De architectura, by the Roman architect Vitruvius. According to Vitruvius, a good building should satisfy the three principles of firmitas, utilitas, venustas, commonly known by the original translation: firmness, commodity, and delight. An equivalent in modern English would be: Durability – a building should stand up robustly; Utility – it should be suitable for the purposes for which it is used; Beauty – it should be aesthetically pleasing. According to Vitruvius, the architect should strive to fulfill each of these three attributes as well as possible. Leon Battista Alberti, who elaborates on the ideas of Vitruvius in his treatise De Re Aedificatoria, saw beauty primarily as a matter of proportion. For Alberti, the rules of proportion were those that governed the idealised human figure, the Golden mean. The most important aspect of beauty was, therefore, an inherent part of an object, rather than something applied superficially. Gothic architecture, Pugin believed, was the only true Christian form of architecture.
The 19th-century English art critic John Ruskin, in his Seven Lamps of Architecture, wrote that architecture was "the art which so disposes and adorns the edifices raised by men" such that the sight of them contributes to man's mental health, power, and pleasure. For Ruskin, the aesthetic was of overriding significance, and his work goes on to state that a building is not truly a work of architecture unless it is in some way adorned. For Ruskin, a well-constructed, well-proportioned, functional building needed string courses or rustication, at the very least. Le Corbusier, distinguishing architecture from mere construction, wrote: "But suddenly you touch my heart, you do me good. I am happy and I say: This is beautiful. That is Architecture." Le Corbusier's contemporary Ludwig Mies van der Rohe said, "Architecture starts when you carefully put two bricks together." The notable 19th-century architect of skyscrapers, Louis Sullivan, promoted an overriding precept to architectural design: "form follows function." Function came to be seen as encompassing all criteria of the use, perception and enjoyment of a building, not only practical but also aesthetic, psychological and cultural.
Architecture
–
Brunelleschi, in the building of the dome of Florence Cathedral in the early 15th-century, not only transformed the building and the city, but also the role and status of the architect.
Architecture
–
Section of Brunelleschi 's dome drawn by the architect Cigoli (c. 1600)
Architecture
–
The Parthenon, Athens, Greece, "the supreme example among architectural sites." (Fletcher).
Architecture
–
The Houses of Parliament, Westminster, master-planned by Charles Barry, with interiors and details by A.W.N. Pugin
58.
Path integral formulation
–
The path integral formulation of quantum mechanics is a description of quantum theory that generalizes the action principle of classical mechanics. Unlike previous methods, the path integral allows a physicist to easily change coordinates between very different canonical descriptions of the same quantum system. Another advantage is that it is easier to guess the correct form of the Lagrangian of a theory. Possible downsides of the approach include that unitarity of the S-matrix is obscure in the formulation. The path-integral approach has been proved to be equivalent to the other formalisms of quantum mechanics and quantum field theory. Thus, by deriving either approach from the other, problems associated with one or the other approach go away. The Schrödinger equation is a diffusion equation with an imaginary diffusion constant. The basic idea of the path integral formulation can be traced back to Norbert Wiener. This idea was extended to the use of the Lagrangian in quantum mechanics by P. A. M. Dirac in his 1933 article. The complete method was developed in 1948 by Richard Feynman. Some preliminaries were worked out earlier in his doctoral work under the supervision of John Archibald Wheeler. The original motivation stemmed from the desire to obtain a quantum-mechanical formulation for the Wheeler–Feynman absorber theory using a Lagrangian as a starting point. In quantum mechanics, as in classical mechanics, the Hamiltonian is the generator of time translations. This means that the state at a slightly later time differs from the state at the current time by the result of acting with the Hamiltonian operator. For states with a definite energy, this is a statement of the de Broglie relation between frequency and energy, and the general relation is consistent with that plus the superposition principle. The Hamiltonian in classical mechanics is derived from a Lagrangian, which is a more fundamental quantity relative to special relativity.
The Hamiltonian indicates how to march forward in time, but the time is different in different reference frames, so the Hamiltonian is different in different frames, and this type of symmetry is not apparent in the original formulation of quantum mechanics. The Hamiltonian is a function of the position and momentum at one time, and it determines the position and momentum a little later. The Lagrangian is a function of the position now and the position a little later. The relation between the two is by a Legendre transform, and the condition that determines the classical equations of motion is that the action has an extremum. In quantum mechanics, the Legendre transform is hard to interpret, because the motion is not over a definite trajectory. In classical mechanics, with discretization in time, the Legendre transform becomes ϵH = ϵp q̇ − ϵL and p = ∂L/∂q̇, where the partial derivative with respect to q̇ holds q fixed. The inverse Legendre transform is ϵL = ϵp q̇ − ϵH, where q̇ = ∂H/∂p, and the partial derivative now is with respect to p at fixed q.
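The classical Legendre transform H = p q̇ − L, with p = ∂L/∂q̇, can be checked numerically. The harmonic-oscillator Lagrangian below is an illustrative choice (not taken from the text), for which both the transform and the standard Hamiltonian are known in closed form.

```python
# Illustrative check of the Legendre transform for a harmonic oscillator:
#   L(q, qdot) = (1/2) m qdot^2 - (1/2) k q^2
#   p = dL/dqdot = m * qdot
#   H = p*qdot - L, which should equal p^2/(2m) + (1/2) k q^2.
m, k = 1.5, 2.0
q, qdot = 0.7, -0.3

L = 0.5 * m * qdot**2 - 0.5 * k * q**2
p = m * qdot                                 # conjugate momentum
H_legendre = p * qdot - L                    # Legendre transform of L
H_direct = p**2 / (2 * m) + 0.5 * k * q**2   # the usual Hamiltonian

print(abs(H_legendre - H_direct) < 1e-12)  # True
```

The inverse transform works the same way: L = p q̇ − H with q̇ = ∂H/∂p = p/m recovers the Lagrangian above.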
Path integral formulation
–
These are just three of the paths that contribute to the quantum amplitude for a particle moving from point A at some time t 0 to point B at some other time t 1.
59.
Quantum mechanics
–
Quantum mechanics, including quantum field theory, is a branch of physics which is the fundamental theory of nature at the small scales and low energies of atoms and subatomic particles. Classical physics, the physics existing before quantum mechanics, derives from quantum mechanics as an approximation valid only at large scales. Early quantum theory was profoundly reconceived in the mid-1920s. The reconceived theory is formulated in various specially developed mathematical formalisms. In one of them, a mathematical function, the wave function, provides information about the probability amplitude of position, momentum, and other physical properties of a particle. In 1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he later described in a paper titled On the nature of light. This experiment played a major role in the general acceptance of the wave theory of light. In 1838, Michael Faraday discovered cathode rays. In 1900, Max Planck hypothesized that energy is radiated and absorbed in discrete quanta, which precisely matched the observed patterns of black-body radiation. In 1896, Wilhelm Wien had empirically determined a distribution law of black-body radiation; Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, it was valid only at high frequencies and underestimated the radiance at low frequencies. Later, Planck corrected this model using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law. Following Max Planck's solution in 1900 to the black-body radiation problem, Albert Einstein offered a quantum-based theory to explain the photoelectric effect. Among the first to study quantum phenomena in nature were Arthur Compton and C. V. Raman. Robert Andrews Millikan studied the photoelectric effect experimentally, and Albert Einstein developed a theory for it. In 1913, Peter Debye extended Niels Bohr's theory of atomic structure, introducing elliptical orbits.
This phase is known as the old quantum theory. According to Planck, each energy element is proportional to its frequency: E = hν, where h is Planck's constant. Planck cautiously insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the reality of the radiation itself. In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect; he won the 1921 Nobel Prize in Physics for this work. Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle, with a discrete quantum of energy that was dependent on its frequency. The Copenhagen interpretation of Niels Bohr became widely accepted. In the mid-1920s, developments in quantum mechanics led to its becoming the standard formulation for atomic physics. In the summer of 1925, Bohr and Heisenberg published results that closed the old quantum theory. Out of deference to their particle-like behavior in certain processes and measurements, light quanta came to be called photons. From Einstein's simple postulation was born a flurry of debating, theorizing, and testing. Thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927.
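Wien's distribution law and Planck's correction can be compared numerically. Dropping the prefactor 2hν³/c² that the two laws share, the Wien approximation is e^(−hν/kT) while Planck's law gives 1/(e^(hν/kT) − 1); the values below are illustrative and show Wien underestimating the radiance at low frequency while agreeing closely at high frequency.

```python
import math

def wien(x):
    """Wien approximation (exponential factor only), x = h*nu/(k*T)."""
    return math.exp(-x)

def planck(x):
    """Planck's law (Bose-Einstein factor only), x = h*nu/(k*T)."""
    return 1.0 / (math.exp(x) - 1.0)

# Low frequency (small x): Wien falls far short of Planck.
print(wien(0.01) < planck(0.01))                        # True
# High frequency (large x): the two agree to better than 0.01%.
print(abs(wien(10.0) - planck(10.0)) / planck(10.0) < 1e-4)  # True
```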
Quantum mechanics
–
Max Planck is considered the father of the quantum theory.
Quantum mechanics
–
Solution to Schrödinger's equation for the hydrogen atom at different energy levels. The brighter areas represent a higher probability of finding an electron
Quantum mechanics
–
The 1927 Solvay Conference in Brussels.
60.
Fundamental interaction
–
In physics, the fundamental interactions, also known as fundamental forces, are the interactions that do not appear to be reducible to more basic interactions. There are four conventionally accepted fundamental interactions: gravitational, electromagnetic, strong, and weak. Each one is described mathematically as a field. The gravitational force is modelled as a continuous classical field. The other three, part of the Standard Model of particle physics, are described as discrete quantum fields, and their interactions are each carried by a quantum. The strong and weak interactions have short ranges, producing forces at minuscule, subatomic distances. The strong interaction, which is carried by the gluon, is responsible for the binding of quarks together to form hadrons, such as protons and neutrons. As a residual effect, it creates the nuclear force that binds the latter particles to form atomic nuclei. The weak interaction, which is carried by the W and Z bosons, also acts on the nucleus, mediating radioactive decay. The other two, electromagnetism and gravity, produce significant forces at macroscopic scales where the effects can be seen directly in everyday life. The electromagnetic force, carried by the photon, creates electric and magnetic fields. Electromagnetic forces tend to cancel each other out when large collections of objects are considered, so over the largest distances gravity tends to be the dominant force. Some theorists seek to unite the electroweak and strong fields within a Grand Unified Theory (GUT). Some theories, notably string theory, seek both quantum gravity (QG) and GUT within one framework, unifying all four fundamental interactions along with mass generation within a theory of everything. A few researchers have interpreted various anomalous observations in physics as evidence for a fifth force. Inferring that all objects bearing mass approach at a constant rate, but collide by impact proportional to their masses, Newton concluded that matter exhibits an attractive force.
Thus Newton's theory violated the first principle of mechanical philosophy, as stated by Descartes, that there should be no action at a distance. Conversely, during the 1820s, when explaining magnetism, Michael Faraday inferred a field filling space and transmitting that force. Faraday conjectured that ultimately, all forces unified into one. In the early 1870s, James Clerk Maxwell unified electricity and magnetism as effects of an electromagnetic field whose third consequence was light, travelling at constant speed in a vacuum. The Standard Model of particle physics was developed throughout the latter half of the 20th century. In the Standard Model, the electromagnetic, strong, and weak interactions associate with elementary particles, whose behaviours are modelled in quantum mechanics (QM). For predictive success with QM's probabilistic outcomes, particle physics conventionally models QM events across a field set to special relativity, altogether relativistic quantum field theory. Force particles, called gauge bosons—force carriers or messenger particles of underlying fields—interact with matter particles, called fermions. Everyday matter is atoms, composed of three fermion types: up-quarks and down-quarks constituting, as well as electrons orbiting, the atom's nucleus. The electromagnetic interaction was unified with the weak interaction, whose force carriers are W and Z bosons, traversing minuscule distances, in electroweak theory. The electroweak interaction would operate at the high temperatures prevailing soon after the presumed Big Bang
Fundamental interaction
–
The Standard Model of elementary particles, with the fermions in the first three columns, the gauge bosons in the fourth column, and the Higgs boson in the fifth column
61.
Aesthetics
–
Aesthetics is a branch of philosophy that explores the nature of art, beauty, and taste, and is concerned with the creation and appreciation of beauty. It is more scientifically defined as the study of sensory or sensori-emotional values, sometimes called judgements of sentiment and taste. More broadly, scholars in the field define aesthetics as critical reflection on art, culture and nature. In modern English, the term aesthetic can also refer to a set of principles underlying the works of a particular art movement or theory: one speaks, for example, of the Cubist aesthetic. The word aesthetic is derived from the Greek αἰσθητικός, which in turn was derived from αἰσθάνομαι, meaning "I perceive, feel, sense". For some, aesthetics is considered a synonym for the philosophy of art since Hegel, while others insist that there is a significant distinction between these closely related fields. In practice, aesthetic judgement refers to the sensory contemplation or appreciation of an object. Philosophical aesthetics has not only to speak about art and to produce judgments about art works, but also to give a definition of what art is. Art is an autonomous entity for philosophy, because art deals with the senses. Hence, there are two different conceptions of art in aesthetics: art as knowledge or art as action. Any aesthetic doctrines that guided the production and interpretation of prehistoric art are mostly unknown. Western aesthetics usually refers to Greek philosophers as the earliest source of formal aesthetic considerations. Plato believed in beauty as a form in which beautiful objects partake, and he felt that beautiful objects incorporated proportion, harmony, and unity among their parts. Similarly, in the Metaphysics, Aristotle found that the universal elements of beauty were order, symmetry, and definiteness. From the late 17th to the early 20th century Western aesthetics underwent a slow revolution into what is often called modernism. German and British thinkers emphasized beauty as the key component of art and of the aesthetic experience, and saw art as necessarily aiming at absolute beauty. 
For Alexander Gottlieb Baumgarten, aesthetics is the science of the sense experiences, a younger sister of logic. For Immanuel Kant, the aesthetic experience of beauty is a judgment of a subjective but similar human truth; however, beauty cannot be reduced to any more basic set of features. For Friedrich Schiller, aesthetic appreciation of beauty is the most perfect reconciliation of the sensual and rational parts of human nature. For Friedrich Wilhelm Joseph Schelling, the philosophy of art is the "organon" of philosophy concerning the relation between man and nature. So aesthetics now began to be the name for the philosophy of art. Friedrich von Schlegel, August Wilhelm Schlegel, Friedrich Schleiermacher and Georg Wilhelm Friedrich Hegel also gave lectures on aesthetics as philosophy of art after 1800. For Hegel, all culture is a matter of absolute spirit coming to be manifest to itself, stage by stage. Art is the first stage in which the absolute spirit is manifest immediately to sense-perception, and is thus an objective rather than subjective revelation of beauty. For Arthur Schopenhauer, aesthetic contemplation of beauty frees the intellect from the dictates of the will; it is thus one way to fight suffering. The British were largely divided into intuitionist and analytic camps
Aesthetics
–
Bronze sculpture, thought to be either Poseidon or Zeus, National Archaeological Museum of Athens
Aesthetics
–
Cubist painting by Georges Braque, Violin and Candlestick (1910)
Aesthetics
–
William Hogarth, self-portrait, 1745
Aesthetics
–
Example of the Dada aesthetic, Marcel Duchamp's Fountain, 1917
62.
Beauty
–
Beauty is a characteristic of an animal, idea, object, person or place that provides a perceptual experience of pleasure or satisfaction. Beauty is studied as part of aesthetics, culture, social psychology and sociology. An ideal beauty is an entity which is admired, or possesses features widely attributed to beauty in a particular culture, for perfection. The experience of beauty often involves an interpretation of some entity as being in balance and harmony with nature; because this can be a subjective experience, it is often said that beauty is in the eye of the beholder. The classical Greek noun that best translates to the English beauty or beautiful was κάλλος, kallos, and the adjective was καλός, kalos. However, kalos may also be translated as "good" or "of fine quality" and thus has a broader meaning than only beautiful. Similarly, kallos was used differently from the English word beauty in that it first and foremost applied to humans. The Koine Greek word for beautiful was ὡραῖος, hōraios, an adjective etymologically coming from the word ὥρα, hōra, meaning hour. In Koine Greek, beauty was thus associated with being of one's hour. Thus, a ripe fruit was considered beautiful, whereas a young woman trying to appear older or an older woman trying to appear younger would not be considered beautiful. In Attic Greek, hōraios had many meanings, including youthful. The earliest Western theory of beauty can be found in the works of early Greek philosophers from the pre-Socratic period, such as Pythagoras. The Pythagorean school saw a strong connection between mathematics and beauty. In particular, they noted that objects proportioned according to the golden ratio seemed more attractive, and ancient Greek architecture is based on this view of symmetry and proportion. Plato considered beauty to be the Idea above all other Ideas. Aristotle saw a relationship between the beautiful and virtue, arguing that "Virtue aims at the beautiful." During the Gothic era, the classical canon of beauty was rejected as sinful. 
Later, Renaissance and Humanist thinkers rejected this view, and considered beauty to be the product of rational order and harmonious proportions. Renaissance artists and architects criticised the Gothic period as irrational and barbarian. This point of view of Gothic art lasted until Romanticism, in the 19th century. The Age of Reason saw a rise in an interest in beauty as a philosophical subject. For example, Scottish philosopher Francis Hutcheson argued that beauty is "unity in variety and variety in unity". The Romantic poets, too, became concerned with the nature of beauty, with John Keats arguing in Ode on a Grecian Urn that "Beauty is truth, truth beauty, that is all / Ye know on earth, and all ye need to know." In the Romantic period, Edmund Burke postulated a difference between beauty in its classical meaning and the sublime. The concept of the sublime, as explicated by Burke and Kant, suggested viewing Gothic art and architecture, though not in accordance with the classical standard of beauty, as sublime
Beauty
–
Rayonnant rose window in Notre Dame de Paris. In Gothic architecture, light was considered the most beautiful revelation of God.
Beauty
–
Beauty
–
The Birth of Venus, by Sandro Botticelli. The goddess Venus is the classical personification of beauty.
Beauty
–
Fresco of a Roman woman from Pompeii, c. 50 AD
63.
Open set
–
In topology, an open set is an abstract concept generalizing the idea of an open interval in the real line. The conditions imposed on open sets are very loose, and they allow enormous flexibility in the choice of open sets: in the two extremes, every set can be open, or no set can be open but the space itself and the empty set. In practice, however, open sets are usually chosen to be similar to the open intervals of the real line. The notion of an open set provides a way to speak of nearness of points in a topological space. Once a choice of open sets is made, the properties of continuity, connectedness, and compactness, which use notions of nearness, can be defined using these open sets. Each choice of open sets for a space is called a topology. Although open sets and the topologies that they comprise are of central importance in point-set topology, they are also used in other branches of mathematics. Intuitively, an open set provides a method to distinguish two points. For example, if about one point in a topological space there exists an open set not containing another point, the two points are referred to as topologically distinguishable. In this manner, one may speak of whether two subsets of a topological space are "near" without concretely defining a metric on the topological space. Therefore, topological spaces may be seen as a generalization of metric spaces. In the set of all real numbers, one has the natural Euclidean metric; that is, a function which measures the distance between two real numbers: d(x, y) = |x − y|. Therefore, given a real number x, one can speak of the set of all points close to that real number; that is, within ε of x. In essence, points within ε of x approximate x to an accuracy of degree ε. Note that ε > 0 always, but as ε becomes smaller and smaller, one obtains points that approximate x to a higher and higher degree of accuracy. For example, if x = 0 and ε = 1, the points within ε of x are precisely the points of the interval (−1, 1); that is, the set of all real numbers between −1 and 1. However, with ε = 0.5, the points within ε of x are precisely the points of (−0.5, 0.5). Clearly, these points approximate x to a greater degree of accuracy than when ε = 1. 
The previous discussion shows, for the case x = 0, that one may approximate x to higher and higher degrees of accuracy by choosing ε smaller and smaller. In particular, sets of the form (−ε, ε) give us a lot of information about points close to x = 0. Thus, rather than speaking of a concrete Euclidean metric, one may use such sets to describe points close to x. If, instead, one were to use only the whole set R to approximate 0, then we find that in some sense every real number is distance 0 away from 0. It may help in this case to think of the measure as being a binary condition: all things in R are equally close to 0. In general, one refers to the family of sets containing 0, used to approximate 0, as a neighborhood basis. In fact, one may generalize these notions to an arbitrary set, rather than just the real numbers. In this case, given a point x of that set, one may define a collection of sets around x, used to approximate x. Of course, this collection would have to satisfy certain properties, for otherwise we may not have a well-defined method to measure distance
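The ε-neighborhood test described above can be sketched in a few lines of Python. This is only an illustrative sketch; the function name within_epsilon is not from the text.

```python
def within_epsilon(x, epsilon, point):
    """Return True if `point` lies in the open interval (x - epsilon, x + epsilon)."""
    return abs(point - x) < epsilon

# Points within epsilon = 1 of x = 0 form the open interval (-1, 1).
assert within_epsilon(0, 1, 0.999)
assert not within_epsilon(0, 1, 1.0)      # endpoints are excluded: the interval is open

# Shrinking epsilon to 0.5 keeps only points that approximate 0 more closely.
assert within_epsilon(0, 0.5, 0.499)
assert not within_epsilon(0, 0.5, 0.75)
```

Note the strict inequality: it is what makes the neighborhood an open interval rather than a closed one.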
Open set
–
Example: The points (x, y) satisfying x² + y² = r² are colored blue. The points (x, y) satisfying x² + y² < r² are colored red. The red points form an open set. The blue points form a boundary set. The union of the red and blue points is a closed set.
64.
Homeomorphism
–
In the mathematical field of topology, a homeomorphism or topological isomorphism or bicontinuous function is a continuous function between topological spaces that has a continuous inverse function. Homeomorphisms are the isomorphisms in the category of topological spaces; that is, two spaces with a homeomorphism between them are called homeomorphic, and from a topological viewpoint they are the same. The word homeomorphism comes from the Greek words ὅμοιος (homoios) = similar and μορφή (morphē) = shape. Roughly speaking, a topological space is a geometric object, and a homeomorphism is a continuous stretching and bending of the object into a new shape. Thus, a square and a circle are homeomorphic to each other, but a sphere and a torus are not. A function f : X → Y between two topological spaces X and Y is called a homeomorphism if it has the following properties: f is a bijection, f is continuous, and the inverse function f −1 is continuous. A function with these three properties is sometimes called bicontinuous. If such a function exists, we say X and Y are homeomorphic. A self-homeomorphism is a homeomorphism from a topological space to itself. The homeomorphisms form an equivalence relation on the class of all topological spaces. The resulting equivalence classes are called homeomorphism classes. The open interval (a, b) is homeomorphic to the real numbers R for any a < b. The unit 2-disc D2 and the unit square in R2 are homeomorphic; an example of a bicontinuous mapping from the square to the disc can be given in polar coordinates. The graph of a differentiable function is homeomorphic to the domain of the function. A differentiable parametrization of a curve is a homeomorphism between the domain of the parametrization and the curve. A chart of a manifold is a homeomorphism between an open subset of the manifold and an open subset of a Euclidean space. The stereographic projection is a homeomorphism between the unit sphere in R3 with a single point removed and the set of all points in R2. If G is a topological group, its inversion map x ↦ x −1 is a homeomorphism. 
Also, for any x ∈ G, the left translation y ↦ x y and the right translation y ↦ y x are homeomorphisms. Rm and Rn are not homeomorphic for m ≠ n. The Euclidean real line is not homeomorphic to the unit circle as a subspace of R2, since the unit circle is compact as a subspace of Euclidean R2 but the real line is not compact. The third requirement, that f −1 be continuous, is essential. Consider for instance the function f : [0, 2π) → S1 defined by f(φ) = (cos φ, sin φ); this function is bijective and continuous, but its inverse is not continuous, so it is not a homeomorphism
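The failure of continuity of the inverse in the example f(φ) = (cos φ, sin φ) can be checked numerically: two points on the circle just above and just below (1, 0) are very close together, yet their preimages lie at opposite ends of [0, 2π). This is an illustrative sketch; the helper names f and f_inverse are hypothetical.

```python
import math

def f(phi):
    """Map phi in [0, 2*pi) to a point on the unit circle S^1."""
    return (math.cos(phi), math.sin(phi))

def f_inverse(point):
    """Recover the angle in [0, 2*pi) from a point on the unit circle."""
    x, y = point
    return math.atan2(y, x) % (2 * math.pi)

# Two circle points straddling (1, 0) are very close together...
a = f(0.001)
b = f(2 * math.pi - 0.001)
dist = math.hypot(a[0] - b[0], a[1] - b[1])
assert dist < 0.01

# ...but their preimages sit at opposite ends of [0, 2*pi):
# the inverse map tears the circle apart, so it is not continuous.
assert abs(f_inverse(a) - f_inverse(b)) > 6.0
```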
Homeomorphism
–
A trefoil knot is homeomorphic to a circle, but not isotopic. Continuous mappings are not always realizable as deformations. Here the knot has been thickened to make the image understandable.
Homeomorphism
–
A continuous deformation between a coffee mug and a donut (torus) illustrating that they are homeomorphic. But there need not be a continuous deformation for two spaces to be homeomorphic — only a continuous mapping with a continuous inverse.
65.
Integral
–
In mathematics, an integral assigns numbers to functions in a way that can describe displacement, area, volume, and other concepts that arise by combining infinitesimal data. Integration is one of the two main operations of calculus, with its inverse, differentiation, being the other. The area above the x-axis adds to the total and that below the x-axis subtracts from the total. Roughly speaking, the operation of integration is the reverse of differentiation. For this reason, the term integral may also refer to the related notion of the antiderivative, a function F whose derivative is the given function f. In this case, it is called an indefinite integral and is written F(x) = ∫ f(x) dx. The integrals discussed in this article are those termed definite integrals. A rigorous mathematical definition of the integral was given by Bernhard Riemann. It is based on a limiting procedure which approximates the area of a curvilinear region by breaking the region into thin vertical slabs. A line integral is defined for functions of two or three variables, and the interval of integration is replaced by a curve connecting two points on the plane or in the space. In a surface integral, the curve is replaced by a piece of a surface in three-dimensional space. The method of exhaustion was further developed and employed by Archimedes in the 3rd century BC and used to calculate areas for parabolas and an approximation to the area of a circle. A similar method was developed independently in China around the 3rd century AD by Liu Hui. This method was later used in the 5th century by Chinese father-and-son mathematicians Zu Chongzhi and Zu Geng. The next significant advances in integral calculus did not begin to appear until the 17th century. Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation. Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers. 
The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Newton and Leibniz. The theorem demonstrates a connection between integration and differentiation, and this connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Newton and Leibniz developed
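Riemann's procedure of breaking the region into thin vertical slabs can be sketched directly. The following illustrative code approximates a definite integral with a midpoint Riemann sum; the function name riemann_sum is hypothetical.

```python
def riemann_sum(f, a, b, n):
    """Approximate the definite integral of f over [a, b] using n equal
    vertical slabs, evaluating f at the midpoint of each slab."""
    width = (b - a) / n
    return sum(f(a + (i + 0.5) * width) * width for i in range(n))

# The area under y = x^2 from 0 to 1 is exactly 1/3; with 1000 slabs
# the approximation agrees to better than one part in a million.
approx = riemann_sum(lambda x: x * x, 0.0, 1.0, 1000)
assert abs(approx - 1 / 3) < 1e-6
```

As n grows, the slabs become thinner and the sum converges to the exact value, which is the limiting idea behind Riemann's definition.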
Integral
–
A definite integral of a function can be represented as the signed area of the region bounded by its graph.
66.
Set theory
–
Set theory is a branch of mathematical logic that studies sets, which informally are collections of objects. Although any type of object can be collected into a set, set theory is applied most often to objects that are relevant to mathematics, and the language of set theory can be used in the definitions of nearly all mathematical objects. The modern study of set theory was initiated by Georg Cantor and Richard Dedekind in the 1870s. Set theory is commonly employed as a foundational system for mathematics, particularly in the form of Zermelo–Fraenkel set theory with the axiom of choice. Beyond its foundational role, set theory is a branch of mathematics in its own right. Contemporary research into set theory includes a diverse collection of topics, ranging from the structure of the real number line to the study of the consistency of large cardinals. Mathematical topics typically emerge and evolve through interactions among many researchers. Set theory, however, was founded by a single paper in 1874 by Georg Cantor: "On a Property of the Collection of All Real Algebraic Numbers". Mathematicians had struggled with the concept of infinity since the 5th century BC, beginning with Greek mathematician Zeno of Elea in the West and early Indian mathematicians in the East. Especially notable is the work of Bernard Bolzano in the first half of the 19th century. The modern understanding of infinity began in 1867–71, with Cantor's work on number theory. An 1872 meeting between Cantor and Richard Dedekind influenced Cantor's thinking and culminated in Cantor's 1874 paper. Cantor's work initially polarized the mathematicians of his day. While Karl Weierstrass and Dedekind supported Cantor, Leopold Kronecker, now seen as a founder of mathematical constructivism, did not. The utility of set theory led to the article "Mengenlehre", contributed in 1898 by Arthur Schoenflies to Klein's encyclopedia. In 1899 Cantor had himself posed the question "What is the cardinal number of the set of all sets?" 
Russell used his paradox as a theme in his 1903 review of continental mathematics in his The Principles of Mathematics. In 1906 English readers gained the book Theory of Sets of Points by William Henry Young and his wife Grace Chisholm Young, published by Cambridge University Press. The momentum of set theory was such that debate on the paradoxes did not lead to its abandonment. The work of Zermelo in 1908 and of Abraham Fraenkel in 1922 resulted in the set of axioms ZFC, which became the most commonly used set of axioms for set theory. The work of analysts such as Henri Lebesgue demonstrated the great mathematical utility of set theory. Set theory is commonly used as a foundational system, although in some areas category theory is thought to be a preferred foundation. Set theory begins with a fundamental binary relation between an object o and a set A. If o is a member of A, the notation o ∈ A is used. Since sets are objects, the membership relation can relate sets as well. A derived binary relation between two sets is the subset relation, also called set inclusion. If all the members of set A are also members of set B, then A is a subset of B. For example, {1, 2} is a subset of {1, 2, 3}, and so is {2}, but {1, 4} is not. As implied by this definition, a set is a subset of itself. For cases where this possibility is unsuitable, or where it would make sense to reject it, the term proper subset is defined
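The membership and inclusion relations just described map directly onto Python's built-in set type, which can serve as a small illustrative sketch:

```python
A = {1, 2}
B = {1, 2, 3}

# Membership: o ∈ A
assert 1 in A
assert 4 not in A

# Set inclusion: every member of A is also a member of B, so A is a subset of B.
assert A <= B
assert {2} <= B
assert not {1, 4} <= B

# Every set is a subset of itself; a *proper* subset must be strictly smaller.
assert B <= B
assert not B < B   # B is not a proper subset of itself
```

Python distinguishes subset (`<=`) from proper subset (`<`) exactly as the text does.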
Set theory
–
Georg Cantor
Set theory
–
A Venn diagram illustrating the intersection of two sets.
67.
Arithmetic
–
Arithmetic is a branch of mathematics that consists of the study of numbers, especially the properties of the traditional operations between them—addition, subtraction, multiplication and division. Arithmetic is an elementary part of number theory, and number theory is considered to be one of the top-level divisions of modern mathematics, along with algebra, geometry, and analysis. The terms arithmetic and higher arithmetic were used until the beginning of the 20th century as synonyms for number theory and are still sometimes used to refer to a wider part of number theory. The earliest written records indicate the Egyptians and Babylonians used all the elementary arithmetic operations as early as 2000 BC. These artifacts do not always reveal the specific process used for solving problems, but the characteristics of the particular numeral system strongly influence the complexity of the methods. The hieroglyphic system for Egyptian numerals, like the later Roman numerals, descended from tally marks used for counting. In both cases, this origin resulted in values that used a decimal base but did not include positional notation. Complex calculations with Roman numerals required the assistance of a counting board or the Roman abacus to obtain the results. Early number systems that included positional notation were not decimal, including the sexagesimal (base 60) system for Babylonian numerals. Because of this concept of place value, the ability to reuse the same digits for different values contributed to simpler methods of calculation. The continuous historical development of modern arithmetic starts with the Hellenistic civilization of ancient Greece. Prior to the works of Euclid around 300 BC, Greek studies in mathematics overlapped with philosophical and mystical beliefs. For example, Nicomachus summarized the viewpoint of the earlier Pythagorean approach to numbers in his Introduction to Arithmetic. Greek numerals were used by Archimedes, Diophantus and others in a positional notation not very different from ours. 
Because the ancient Greeks lacked a symbol for zero, they used three separate sets of symbols: one set for the units place, one for the tens place, and one for the hundreds. Then, for the thousands place, they would reuse the symbols for the units place, and so on. Their addition algorithm was identical to ours, and their multiplication algorithm was only very slightly different. Their long division algorithm was the same, and the digit-by-digit square root algorithm once taught in school was known to Archimedes. He preferred it to Hero's method of successive approximation because, once computed, a digit doesn't change, and the square roots of perfect squares, such as 7485696, terminate immediately as 2736. For numbers with a fractional part, such as 546.934, they used negative powers of 60 rather than negative powers of 10 for the fractional part. The ancient Chinese used a similar positional notation. Because they also lacked a symbol for zero, they had one set of symbols for the units place and a second set for the tens place
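Hero's method of successive approximation, mentioned above, repeatedly averages a guess with the quotient of the number by that guess. A minimal sketch in Python (the function name heron_sqrt is illustrative), checked against the perfect square cited in the text:

```python
def heron_sqrt(n, iterations=20):
    """Approximate the square root of n by Hero's (Heron's) method:
    repeatedly replace the guess g with the average of g and n / g."""
    guess = max(n / 2, 1.0)
    for _ in range(iterations):
        guess = (guess + n / guess) / 2
    return guess

# The perfect square mentioned in the text: sqrt(7485696) = 2736 exactly.
assert round(heron_sqrt(7485696)) == 2736
```

Unlike the digit-by-digit algorithm, each iteration here revises the whole estimate, which is exactly the property Archimedes is said to have disliked.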
Arithmetic
–
Arithmetic tables for children, Lausanne, 1835
Arithmetic
–
A scale calibrated in imperial units with an associated cost display.
68.
Geometry
–
Geometry is a branch of mathematics concerned with questions of shape, size, relative position of figures, and the properties of space. A mathematician who works in the field of geometry is called a geometer. Geometry arose independently in a number of early cultures as a practical way of dealing with lengths, areas, and volumes. Geometry began to see elements of formal mathematical science emerging in the West as early as the 6th century BC. By the 3rd century BC, geometry was put into an axiomatic form by Euclid, whose treatment, Euclid's Elements, set a standard for many centuries to follow. Geometry arose independently in India, with texts providing rules for geometric constructions appearing as early as the 3rd century BC. Islamic scientists preserved Greek ideas and expanded on them during the Middle Ages. By the early 17th century, geometry had been put on a solid analytic footing by mathematicians such as René Descartes. Since then, and into modern times, geometry has expanded into non-Euclidean geometry and manifolds. While geometry has evolved significantly throughout the years, there are some general concepts that are more or less fundamental to geometry. These include the concepts of points, lines, planes, surfaces, and angles. Contemporary geometry has many subfields. Euclidean geometry is geometry in its classical sense. The mandatory educational curriculum of the majority of nations includes the study of points, lines, planes, angles, triangles, congruence, similarity, solid figures, and circles. Euclidean geometry also has applications in computer science, crystallography, and various branches of modern mathematics. Differential geometry uses techniques of calculus and linear algebra to study problems in geometry. It has applications in physics, including in general relativity. Topology is the field concerned with the properties of geometric objects that are unchanged by continuous mappings. 
In practice, this often means dealing with large-scale properties of spaces, such as connectedness and compactness. Convex geometry investigates convex shapes in the Euclidean space and its more abstract analogues, often using techniques of real analysis. It has close connections to convex analysis, optimization and functional analysis. Algebraic geometry studies geometry through the use of multivariate polynomials and other algebraic techniques. It has applications in many areas, including cryptography and string theory. Discrete geometry is concerned mainly with questions of relative position of simple geometric objects, such as points, lines and circles. It shares many methods and principles with combinatorics. Geometry has applications to many fields, including art, architecture, and physics, as well as to other branches of mathematics. The earliest recorded beginnings of geometry can be traced to ancient Mesopotamia and Egypt. The earliest known texts on geometry are the Egyptian Rhind Papyrus and Moscow Papyrus, and the Babylonian clay tablets such as Plimpton 322. For example, the Moscow Papyrus gives a formula for calculating the volume of a truncated pyramid, or frustum. Later clay tablets demonstrate that Babylonian astronomers implemented trapezoid procedures for computing Jupiter's position and motion within time-velocity space
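The truncated-pyramid formula attributed to the Moscow Papyrus can be stated and checked directly. The worked numbers below (base 4, top 2, height 6, volume 56) are the values commonly cited for the papyrus's frustum problem, given here as an assumption for illustration:

```python
def truncated_pyramid_volume(a, b, h):
    """Volume of a frustum with square base side a, square top side b,
    and height h: V = (h / 3) * (a^2 + a*b + b^2)."""
    return h / 3 * (a * a + a * b + b * b)

# Commonly cited worked example: base 4, top 2, height 6 gives volume 56.
assert truncated_pyramid_volume(4, 2, 6) == 56
```

Setting a = b recovers the ordinary prism volume a²h, a quick sanity check on the formula.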
Geometry
–
Visual checking of the Pythagorean theorem for the (3, 4, 5) triangle as in the Chou Pei Suan Ching, 500–200 BC.
Geometry
–
An illustration of Desargues' theorem, an important result in Euclidean and projective geometry
Geometry
–
Geometry lessons in the 20th century
Geometry
–
A European and an Arab practicing geometry in the 15th century.
69.
Set (mathematics)
–
In mathematics, a set is a well-defined collection of distinct objects, considered as an object in its own right. For example, the numbers 2, 4, and 6 are distinct objects when considered separately, but when they are considered collectively they form a single set of size three, written {2, 4, 6}. Sets are one of the most fundamental concepts in mathematics. Developed at the end of the 19th century, set theory is now a ubiquitous part of mathematics. In mathematics education, elementary topics such as Venn diagrams are taught at a young age. The German word Menge, rendered as "set" in English, was coined by Bernard Bolzano in his work The Paradoxes of the Infinite. A set is a collection of distinct objects. The objects that make up a set can be anything: numbers, people, letters of the alphabet, other sets, and so on. Sets are conventionally denoted with capital letters. Sets A and B are equal if and only if they have precisely the same elements. Cantor's definition turned out to be inadequate; instead, the notion of a set is taken as an undefined primitive notion in axiomatic set theory. There are two ways of describing, or specifying the members of, a set. One way is by intensional definition, using a rule or semantic description: A is the set whose members are the first four positive integers; B is the set of colors of the French flag. The second way is by extension, that is, listing each member of the set. An extensional definition is denoted by enclosing the list of members in curly brackets: C = {4, 2, 1, 3} and D = {blue, white, red}. One often has the choice of specifying a set either intensionally or extensionally. In the examples above, for instance, A = C and B = D. There are two important points to note about sets. First, in an extensional definition, a set member can be listed two or more times, for example, {11, 6, 6}. However, per extensionality, two definitions of sets which differ only in that one of the definitions lists set members multiple times define, in fact, the same set. Hence, the set {11, 6, 6} is identical to the set {11, 6}. 
The second important point is that the order in which the elements of a set are listed is irrelevant. We can illustrate these two important points with an example: {6, 11} = {11, 6} = {11, 6, 6, 11}. For sets with many elements, the enumeration of members can be abbreviated. For instance, the set of the first thousand positive integers may be specified extensionally as {1, 2, 3, ..., 1000}, where the ellipsis indicates that the list continues in the obvious way. Ellipses may also be used where sets have infinitely many members; thus the set of positive even numbers can be written as {2, 4, 6, 8, ...}
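Both points, that repeated listings and ordering do not matter, are mirrored by Python's set type, which makes for a compact illustrative check:

```python
# Listing a member twice, or listing members in a different order,
# does not change the set: {6, 11} = {11, 6} = {11, 6, 6, 11}.
assert {11, 6} == {6, 11} == {11, 6, 6, 11}

# An extensional definition lists members; an intensional one gives a rule.
A = {1, 2, 3, 4}                          # extension: the first four positive integers
C = {n for n in range(1, 100) if n < 5}   # intension (a rule): the same set
assert A == C
```

The comprehension on the last line plays the role of an intensional definition: it specifies a rule rather than an explicit list, yet it yields an equal set.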
Set (mathematics)
–
A set of polygons in a Venn diagram
70.
Computational complexity theory
–
Computational complexity theory is a branch of the theory of computation that classifies computational problems according to their inherent difficulty. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication and the number of gates in a circuit. One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. Closely related fields in theoretical computer science are analysis of algorithms and computability theory. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. A computational problem can be viewed as an infinite collection of instances together with a solution for every instance. The input string for a computational problem is referred to as a problem instance. In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance. For example, consider the problem of primality testing. The instance is a number and the solution is "yes" if the number is prime and "no" otherwise. Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. For this reason, complexity theory addresses computational problems and not particular problem instances. When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet, so the strings are bitstrings. As in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices. Keeping the discussion independent of the chosen encoding can be achieved by ensuring that different representations can be transformed into each other efficiently. 
Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a special type of computational problem whose answer is either yes or no. A decision problem can be viewed as a formal language, where the members of the language are exactly those instances whose output is yes. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the language. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string; otherwise it is said to reject the input. An example of a decision problem is primality testing: given a number, decide whether it is prime
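The primality-testing problem used as the running example can be sketched as a decision procedure that accepts or rejects each instance. This trial-division sketch is illustrative, not an efficient algorithm:

```python
def is_prime(n):
    """Decide the primality-testing problem by trial division:
    return True (accept) if n is prime, False (reject) otherwise."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False   # reject: n has a nontrivial divisor
        d += 1
    return True            # accept: no divisor found up to sqrt(n)

# Each number is one *instance* of the problem; the decision is yes or no.
assert is_prime(13)
assert not is_prime(15)
assert not is_prime(1)
```

In the language view, the set of accepted instances {2, 3, 5, 7, 11, ...} is the formal language corresponding to this decision problem.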
Computational complexity theory
–
A traveling salesman tour through Germany's 15 largest cities.
71.
Information theory
–
Information theory studies the quantification, storage, and communication of information. A key measure in information theory is entropy. Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip provides less information than specifying the outcome from a roll of a die. Some other important measures in information theory are mutual information, channel capacity, and error exponents. Applications of fundamental topics of information theory include lossless data compression, lossy data compression, and channel coding. The field is at the intersection of mathematics, statistics, computer science, physics, neurobiology, and electrical engineering. Information theory studies the transmission, processing, utilization, and extraction of information. Abstractly, information can be thought of as the resolution of uncertainty. Information theory is a broad and deep mathematical theory, with equally broad and deep applications, amongst which is the vital field of coding theory. These codes can be subdivided into data compression and error-correction techniques. In the latter case, it took many years to find the methods Shannon's work proved were possible. A third class of information theory codes are cryptographic algorithms; concepts, methods and results from coding theory and information theory are widely used in cryptography and cryptanalysis. See the article ban for a historical application. Information theory is also used in information retrieval, intelligence gathering, gambling, statistics, and even in musical composition. The discipline was established by Claude Shannon's 1948 paper "A Mathematical Theory of Communication". Prior to this paper, limited information-theoretic ideas had been developed at Bell Labs. In Ralph Hartley's early work, the unit of information was the decimal digit, much later renamed the hartley in his honour as a unit or scale or measure of information. Alan Turing in 1940 used similar ideas as part of the statistical analysis of the breaking of the German second world war Enigma ciphers. 
Much of the mathematics behind information theory with events of different probabilities was developed for the field of thermodynamics by Ludwig Boltzmann. Information theory is based on probability theory and statistics, and often concerns itself with measures of information associated with random variables. Important quantities of information are entropy, a measure of the information in a single random variable, and mutual information, a measure of the information shared between two random variables. The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. A common unit of information is the bit, based on the binary logarithm; other units include the nat, which is based on the natural logarithm, and the hartley, which is based on the common (base-10) logarithm. In what follows, an expression of the form p log p is considered by convention to be equal to zero whenever p = 0; this is justified because lim p→0+ p log p = 0 for any logarithmic base.
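The quantities above can be made concrete with a short sketch. The function below computes Shannon entropy under the stated conventions: the base argument selects the unit (2 for bits, e for nats, 10 for hartleys), and terms with p = 0 are skipped, matching the convention that p log p = 0 at p = 0. The function name and the specific distributions are illustrative choices, not anything from the source.

```python
import math

def entropy(probs, base=2):
    """Shannon entropy of a discrete distribution.

    base=2 gives bits, math.e gives nats, 10 gives hartleys.
    Terms with p == 0 are skipped, matching the convention
    p*log(p) = 0 at p = 0 (justified by lim p->0+ p log p = 0).
    """
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# A fair coin carries 1 bit of uncertainty; a fair die carries more,
# which is why identifying a die roll conveys more information.
coin_bits = entropy([0.5, 0.5])   # 1.0 bit
die_bits = entropy([1 / 6] * 6)   # log2(6), roughly 2.585 bits
```

A certain outcome (probability 1) has zero entropy, and entropy is maximized when all outcomes are equally likely, as the binary entropy figure below illustrates for two outcomes.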
Information theory
–
A picture showing scratches on the readable surface of a CD-R. Music and data CDs are coded using error-correcting codes and thus can still be read even if they have minor scratches, thanks to error detection and correction.
Information theory
–
Entropy of a Bernoulli trial as a function of success probability, often called the binary entropy function. The entropy is maximized at 1 bit per trial when the two possible outcomes are equally probable, as in an unbiased coin toss.
72.
Turing machine
–
Despite the model's simplicity, given any computer algorithm, a Turing machine can be constructed that is capable of simulating that algorithm's logic. The machine operates on an infinite memory tape divided into discrete cells; it positions its head over a cell and reads the symbol there. The Turing machine was invented in 1936 by Alan Turing, who called it an "a-machine" (automatic machine). Turing machines prove fundamental limitations on the power of mechanical computation; Turing completeness is the ability of a system of instructions to simulate a Turing machine. A Turing machine is a general example of a CPU that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data. More specifically, it is a machine capable of enumerating some arbitrary subset of valid strings of an alphabet. Assuming a black box, the Turing machine cannot know whether it will eventually enumerate any one specific string of the subset with a given program; this is due to the fact that the halting problem is unsolvable, which has major implications for the theoretical limits of computing. The Turing machine is capable of processing an unrestricted grammar, which further implies that it is capable of robustly evaluating first-order logic in an infinite number of ways; this is famously demonstrated through lambda calculus. A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine. The Church–Turing thesis states that Turing machines indeed capture the informal notion of effective methods in logic and mathematics. Studying their abstract properties yields many insights into computer science and complexity theory. At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol, and its behavior is in part determined by that symbol; however, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine.
Any symbol on the tape may therefore eventually have an innings. The Turing machine mathematically models a machine that mechanically operates on a tape. On this tape are symbols, which the machine can read and write, one at a time. In the original article, Turing imagines not a mechanism, but a person whom he calls the "computer", who executes these deterministic mechanical rules slavishly. If δ is not defined on the current state and the current tape symbol, then the machine halts; q0 ∈ Q is the initial state, and F ⊆ Q is the set of final or accepting states. The initial tape contents are said to be accepted by M if it eventually halts in a state from F. Anything that operates according to these specifications is a Turing machine. The 7-tuple for the 3-state busy beaver looks like this: Q = {A, B, C, HALT}, Γ = {0, 1}, b = 0 (the blank symbol), Σ = {1}, q0 = A, F = {HALT}, and δ = see the state table below. Initially all tape cells are marked with 0. In the words of van Emde Boas (p. 6): "The set-theoretical object provides only partial information on how the machine will behave and what its computations will look like." For instance, there will need to be many decisions on what the symbols actually look like, and a failproof way of reading and writing symbols indefinitely.
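One way to see how a concrete δ drives the machine is a minimal simulator. The sketch below uses a dictionary as a tape (cells default to the blank symbol) and a transition table for the commonly cited 3-state, 2-symbol busy beaver; the function name and table encoding are this sketch's own conventions, not notation from the source.

```python
# Minimal Turing machine simulator: delta maps (state, scanned symbol)
# to (symbol to write, head move 'L'/'R', next state).
def run(delta, state='A', halt='H', blank=0, limit=10_000):
    tape, head = {}, 0          # sparse tape; unwritten cells are blank
    while state != halt and limit > 0:
        write, move, state = delta[(state, tape.get(head, blank))]
        tape[head] = write
        head += 1 if move == 'R' else -1
        limit -= 1
    return tape, state

# Transition table for the standard 3-state busy beaver ('H' = HALT).
delta = {
    ('A', 0): (1, 'R', 'B'), ('A', 1): (1, 'L', 'C'),
    ('B', 0): (1, 'L', 'A'), ('B', 1): (1, 'R', 'B'),
    ('C', 0): (1, 'L', 'B'), ('C', 1): (1, 'R', 'H'),
}
tape, state = run(delta)
# The machine halts leaving six 1s on the tape.
```

The returned tape dictionary shows exactly which cells the machine touched, which makes the "partial information" remark above concrete: the 7-tuple alone says nothing about this runtime behavior until the machine is actually run.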
Turing machine
–
The evolution of the busy-beaver's computation starts at the top and proceeds to the bottom.
Turing machine
–
An implementation of a Turing machine
Turing machine
–
A Turing machine realisation in LEGO
Turing machine
–
An experimental prototype of a Turing machine
73.
P = NP problem
–
The P versus NP problem is a major unsolved problem in computer science. Informally speaking, it asks whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer. The underlying issues were first discussed in the 1950s, in letters from John Nash to the National Security Agency and from Kurt Gödel to John von Neumann. It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute to carry a US$1,000,000 prize for the first correct solution. The general class of questions for which some algorithm can provide an answer in polynomial time is called "class P" or just "P". For some questions, there is no known way to find an answer quickly, but a proposed answer can be verified quickly; the class of questions for which an answer can be verified in polynomial time is called NP. Consider the subset sum problem, an example of a problem that is easy to verify, but whose answer may be difficult to compute: given a set of integers, does some nonempty subset of them sum to 0? For instance, does a subset of the set {−2, −3, 15, 14, 7, −10} add up to 0? The answer "yes, because the subset {−2, −3, −10, 15} adds up to zero" can be verified with three additions. However, there is no known algorithm to find such a subset in polynomial time. An answer to the P = NP question would determine whether problems that can be verified in polynomial time, like the subset-sum problem, can also be solved in polynomial time. Although the P versus NP problem was formally defined in 1971, there were previous inklings of the problems involved and the difficulty of proof. In 1955, mathematician John Nash wrote a letter to the NSA in which he speculated that cracking a sufficiently complex code would require time exponential in the length of the key; if proved, this would imply what we today would call P ≠ NP, since a proposed key can easily be verified in polynomial time. Another mention of the underlying problem occurred in a 1956 letter written by Kurt Gödel to John von Neumann. Complexity is measured in terms of resources consumed; the most common resources are time and space, and in such analysis, a model of the computer for which time must be analyzed is required.
Typically such models assume that the computer is deterministic and sequential. Arguably the biggest open question in theoretical computer science concerns the relationship between those two classes: is P equal to NP? In a 2002 poll, most researchers surveyed believed that P ≠ NP; in 2012, 10 years later, the poll was repeated. To attack the P = NP question, the concept of NP-completeness is very useful. NP-complete problems are a set of problems to each of which any other NP problem can be reduced in polynomial time, and whose solution may still be verified in polynomial time. That is, any NP problem can be transformed into any of the NP-complete problems. Informally, an NP-complete problem is an NP problem that is at least as "tough" as any other problem in NP.
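The verify-versus-find asymmetry in subset sum can be sketched in a few lines. Verifying a proposed certificate is a handful of additions, while the obvious search examines up to 2^n − 1 subsets; the function names and the sample instance are illustrative choices for this sketch.

```python
from itertools import combinations

def verifies(subset):
    """Checking a proposed certificate is cheap: just sum it."""
    return len(subset) > 0 and sum(subset) == 0

def find_zero_subset(nums):
    """Brute-force search: tries up to 2^n - 1 nonempty subsets."""
    for r in range(1, len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == 0:
                return combo
    return None

nums = [-2, -3, 15, 14, 7, -10]    # small example instance
witness = find_zero_subset(nums)   # exponential-time search...
assert witness is not None
assert verifies(witness)           # ...but linear-time verification
```

No known algorithm replaces the exponential search with a polynomial-time one; whether one exists is exactly the P versus NP question for this NP-complete problem.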
P = NP problem
–
Diagram of complexity classes provided that P ≠ NP. The existence of problems within NP but outside both P and NP-complete, under that assumption, was established by Ladner's theorem.
74.
Theory of computation
–
In theoretical computer science and mathematics, the theory of computation is the branch that deals with how efficiently problems can be solved on a model of computation, using an algorithm. In order to perform a rigorous study of computation, computer scientists work with a mathematical abstraction of computers called a model of computation. There are several models in use, but the most commonly examined is the Turing machine. It might seem that the potentially infinite memory capacity is an unrealizable attribute, but any decidable problem solved by a Turing machine will always require only a finite amount of memory. So in principle, any problem that can be solved by a Turing machine can be solved by a computer that has a finite amount of memory. The theory of computation can be considered the creation of models of all kinds in the field of computer science; therefore, mathematics and logic are used. In the last century it became an independent academic discipline and was separated from mathematics. Some pioneers of the theory of computation were Alonzo Church, Kurt Gödel, Alan Turing, Stephen Kleene, John von Neumann and Claude Shannon. Automata theory is the study of abstract machines and the computational problems that can be solved using these machines. These abstract machines are called automata; "automata" comes from a Greek word meaning that something is doing something by itself. Automata theory is closely related to formal language theory, as automata are often classified by the class of formal languages they are able to recognize. An automaton can be a finite representation of a formal language that may be an infinite set. Automata are used as models for computing machines, and are used for proofs about computability. Formal language theory is a branch of mathematics concerned with describing languages as a set of operations over an alphabet; it is closely linked with automata theory, as automata are used to generate and recognize formal languages.
Because automata are used as models for computation, formal languages are the preferred mode of specification for any problem that must be computed. Computability theory deals primarily with the question of the extent to which a problem is solvable on a computer; much of computability theory builds on the halting problem result. Many mathematicians and computational theorists who study recursion theory will refer to it as computability theory. Complexity theory considers not only whether a problem can be solved at all on a computer, but also how efficiently the problem can be solved, analyzing how much time and space a given algorithm requires. For example, finding a particular number in a long list of numbers becomes harder as the list of numbers grows larger. If we say there are n numbers in the list, then if the list is not sorted or indexed in any way we may have to look at every number in order to find the number we're seeking. We thus say that in order to solve this problem, the computer needs to perform a number of steps that grows linearly with the size of the problem.
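The linear-growth claim above can be made concrete with a tiny search routine that counts its own comparisons; the function name and return convention are choices made for this sketch.

```python
def linear_search(items, target):
    """Scan an unsorted list and return the number of comparisons
    needed to find target, or None if it is absent. Worst case:
    every element is examined, so the cost grows linearly with n."""
    for steps, value in enumerate(items, start=1):
        if value == target:
            return steps
    return None

# When the target sits at the end (the worst case), doubling the
# list size doubles the number of steps: the O(n) behavior.
assert linear_search(list(range(10)), 9) == 10
assert linear_search(list(range(1000)), 999) == 1000
```

A sorted list would admit binary search in logarithmically many steps instead, which is why complexity theory distinguishes problems by the representations and algorithms available for them.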
Theory of computation
–
An artistic representation of a Turing machine. Turing machines are frequently used as theoretical models for computing.
75.
Integer
–
An integer is a number that can be written without a fractional component. For example, 21, 4, 0, and −2048 are integers, while 9.75 and 5 1⁄2 are not. The set of integers consists of zero, the positive natural numbers, also called whole numbers or counting numbers, and their additive inverses (the negative integers). This set is often denoted by a boldface Z or blackboard bold ℤ, standing for the German word Zahlen ("numbers"). ℤ is a subset of the sets of rational and real numbers and, like the natural numbers, is countably infinite. The integers form the smallest group and the smallest ring containing the natural numbers. In algebraic number theory, the integers are sometimes called rational integers to distinguish them from the more general algebraic integers; in fact, the rational integers are the algebraic integers that are also rational numbers. Like the natural numbers, Z is closed under the operations of addition and multiplication; however, with the inclusion of the negative natural numbers and, importantly, 0, Z is also closed under subtraction. The integers form a unital ring which is the most basic one, in the following sense: for any unital ring, there is a unique ring homomorphism from the integers into this ring. This universal property, namely being an initial object in the category of rings, characterizes the ring Z. Z is not closed under division, since the quotient of two integers need not be an integer; and although the natural numbers are closed under exponentiation, the integers are not. The following lists some of the properties of addition and multiplication for any integers a, b and c. In the language of abstract algebra, the first five properties listed above for addition say that Z under addition is an abelian group. As a group under addition, Z is a cyclic group; in fact, Z under addition is the only infinite cyclic group, in the sense that any infinite cyclic group is isomorphic to Z. The first four properties listed above for multiplication say that Z under multiplication is a commutative monoid. However, not every integer has a multiplicative inverse; e.g. there is no integer x such that 2x = 1, because the left hand side is even.
This means that Z under multiplication is not a group. All the rules from the above property table, except for the last, taken together say that Z together with addition and multiplication is a commutative ring with unity. It is the prototype of all objects of such algebraic structure. Only those equalities of expressions that are true in any unital commutative ring are true in Z for all values of the variables. Note that certain non-zero integers map to zero in certain rings. The lack of zero-divisors in the integers means that the commutative ring Z is an integral domain.
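The ring axioms listed above can be spot-checked mechanically on a finite sample; such a check is only evidence on the sampled values, not a proof, and the sample range here is an arbitrary choice for this sketch.

```python
from itertools import product

# Spot-check ring axioms for Z on a small sample of integers.
sample = range(-3, 4)
for a, b, c in product(sample, repeat=3):
    assert a + b == b + a                 # commutativity of addition
    assert (a + b) + c == a + (b + c)     # associativity of addition
    assert a * b == b * a                 # commutativity of multiplication
    assert a * (b + c) == a * b + a * c   # distributivity
    assert a + (-a) == 0                  # additive inverses exist
    assert a * 1 == a                     # multiplicative identity

# But Z under multiplication is not a group: 2 has no multiplicative
# inverse, since 2*x == 1 has no integer solution (the left side is even).
assert not any(2 * x == 1 for x in range(-100, 101))
```

The failed inverse search is exactly the last row of the property table: the rules short of multiplicative inverses make Z a commutative ring with unity, not a field.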
Integer
–
Algebraic structure → Group theory Group theory
76.
Fermat's Last Theorem
–
In number theory, Fermat's Last Theorem states that no three positive integers a, b, and c satisfy the equation aⁿ + bⁿ = cⁿ for any integer value of n greater than 2. The cases n = 1 and n = 2 have been known to have infinitely many solutions since antiquity. The theorem was first conjectured by Pierre de Fermat in 1637 in the margin of a copy of Arithmetica, where he claimed he had a proof that was too large to fit in the margin. The first successful proof was released in 1994 by Andrew Wiles. The unsolved problem stimulated the development of algebraic number theory in the 19th century and the proof of the modularity theorem in the 20th century. The Pythagorean equation, x² + y² = z², has an infinite number of positive integer solutions for x, y, and z. Around 1637, Fermat wrote in the margin of a book that the more general equation aⁿ + bⁿ = cⁿ had no solutions in positive integers. Although he claimed to have a general proof of his conjecture, Fermat left no details of his proof. His claim was discovered some 30 years later, after his death. This claim, which came to be known as Fermat's Last Theorem, stood unsolved in mathematics for the following three and a half centuries. The claim eventually became one of the most notable unsolved problems of mathematics; attempts to prove it prompted substantial development in number theory, and over time Fermat's Last Theorem gained prominence as an unsolved problem in mathematics. With the special case n = 4 proved, it suffices to prove the theorem for exponents n that are prime numbers. Over the next two centuries, the conjecture was proved for only the primes 3, 5, and 7; in the mid-19th century, Ernst Kummer extended this and proved the theorem for all regular primes, leaving irregular primes to be analyzed individually. Around 1955, Japanese mathematicians Goro Shimura and Yutaka Taniyama suspected a link might exist between elliptic curves and modular forms, two quite different areas of mathematics.
Known at the time as the Taniyama–Shimura–Weil conjecture, and eventually as the modularity theorem, it stood on its own; it was widely seen as significant and important in its own right, but was widely considered completely inaccessible to proof. In 1984, Gerhard Frey noticed an apparent link between the modularity theorem and Fermat's Last Theorem; this potential link was confirmed two years later by Ken Ribet, who gave a conditional proof of Fermat's Last Theorem that depended on the modularity theorem. On hearing this, English mathematician Andrew Wiles, who had a childhood fascination with Fermat's Last Theorem, decided to try to prove the modularity theorem as a way to prove Fermat's Last Theorem. In 1993, after six years of working secretly on the problem, Wiles announced a proof; his paper was massive in size and scope. A flaw was discovered in one part of the paper during peer review, and fixing it required a further year and collaboration with a past student, Richard Taylor. As a result, the final proof in 1995 was accompanied by a second, smaller joint paper to that effect.
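The contrast between the Pythagorean case and the higher exponents can be seen by brute force over a small range; the function name and search bound are arbitrary choices for this sketch, and the empty result for n = 3 is of course only a finite check, with the theorem guaranteeing no solutions at all.

```python
def solutions(n, bound):
    """All triples (a, b, c) with 1 <= a <= b <= c < bound
    satisfying a**n + b**n == c**n."""
    return [(a, b, c)
            for a in range(1, bound)
            for b in range(a, bound)
            for c in range(b, bound)
            if a**n + b**n == c**n]

# n = 2 (the Pythagorean equation) has many positive solutions...
assert (3, 4, 5) in solutions(2, 30)
# ...while the same search for n = 3 finds none, consistent with
# Fermat's Last Theorem, which rules them out for every n > 2.
assert solutions(3, 30) == []
```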
Fermat's Last Theorem
–
The 1670 edition of Diophantus ' Arithmetica includes Fermat's commentary, particularly his "Last Theorem" (Observatio Domini Petri de Fermat).
Fermat's Last Theorem
–
Problem II.8 in the 1621 edition of the Arithmetica of Diophantus. On the right is the margin that was too small to contain Fermat's alleged proof of his "last theorem".
Fermat's Last Theorem
–
British mathematician Andrew Wiles
77.
Subset
–
In mathematics, especially in set theory, a set A is a subset of a set B (or equivalently, B is a superset of A) if A is "contained" inside B, that is, all elements of A are also elements of B. The relationship of one set being a subset of another is called inclusion or sometimes containment. The subset relation defines a partial order on sets, and the algebra of subsets forms a Boolean algebra in which the order relation is inclusion. For any set S, the inclusion relation ⊆ is a partial order on the set P(S) of all subsets of S defined by A ≤ B ⟺ A ⊆ B. We may also partially order P(S) by reverse set inclusion by defining A ≤ B ⟺ B ⊆ A. When quantified, A ⊆ B is represented as ∀x (x ∈ A → x ∈ B). Some authors use the symbol ⊂ to mean subset (not necessarily proper); for such authors, it is true of every set A that A ⊂ A. Other authors prefer to use the symbols ⊂ and ⊃ to indicate proper subset and proper superset, respectively; this usage makes ⊆ and ⊂ analogous to the inequality symbols ≤ and <. For example, if x ≤ y then x may or may not equal y, but if x < y, then x definitely does not equal y, and is less than y. Similarly, using the convention that ⊂ means proper subset, if A ⊆ B, then A may or may not equal B, but if A ⊂ B, then A definitely does not equal B. The set A = {1, 2} is a proper subset of B = {1, 2, 3}; thus both expressions A ⊆ B and A ⊊ B are true. The set D = {1, 2, 3} is a subset, but not a proper subset, of E = {1, 2, 3}; thus D ⊆ E is true. Any set is a subset of itself, but not a proper subset. The empty set, denoted by ∅, is also a subset of any given set X; it is also always a proper subset of any set except itself. There are examples in which both the subset and the whole set are infinite and the subset has the same cardinality as the whole. The set of rational numbers is a proper subset of the set of real numbers; in this example, both sets are infinite, but the latter set has a larger cardinality than the former set. Another example can be shown in an Euler diagram. Inclusion is the canonical partial order in the sense that every partially ordered set is isomorphic to some collection of sets ordered by inclusion.
The ordinal numbers are a simple example: if each ordinal n is identified with the set of all ordinals less than or equal to n, then a ≤ b if and only if a ⊆ b. For the power set P(S) of a set S, the inclusion partial order is the Cartesian product of k = |S| copies of the partial order on {0, 1} for which 0 < 1. This can be illustrated by enumerating S = {s1, s2, …, sk} and associating with each subset T ⊆ S the k-tuple from {0, 1}ᵏ of which the ith coordinate is 1 if and only if si is a member of T.
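The subset-to-tuple correspondence described above is exactly how the power set is usually enumerated in code: each k-tuple of 0s and 1s picks out one subset. The function name below is an arbitrary choice for this sketch.

```python
from itertools import product

def power_set(s):
    """Enumerate P(S) via the bijection between subsets T of S and
    k-tuples in {0,1}^k: the i-th coordinate is 1 iff s[i] is in T."""
    s = list(s)
    return [{x for x, bit in zip(s, bits) if bit}
            for bits in product([0, 1], repeat=len(s))]

subsets = power_set({'a', 'b', 'c'})
assert len(subsets) == 2 ** 3                 # |P(S)| = 2^|S|
assert set() in subsets                       # the empty set ∅ ⊆ S
assert {'a', 'b', 'c'} in subsets             # S ⊆ S, not a proper subset
assert all(t <= {'a', 'b', 'c'} for t in subsets)  # every T satisfies T ⊆ S
```

Python's `<=` on sets is the subset relation, and `<` the proper-subset relation, mirroring the ≤/< analogy with ⊆/⊂ drawn above.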
Subset
–
Euler diagram showing A is a proper subset of B and conversely B is a proper superset of A
78.
Rational number
–
In mathematics, a rational number is any number that can be expressed as the quotient or fraction p/q of two integers, a numerator p and a non-zero denominator q. Since q may be equal to 1, every integer is a rational number. The set of all rational numbers, often referred to as "the rationals", is usually denoted by a boldface Q; it was thus denoted in 1895 by Giuseppe Peano after quoziente, Italian for "quotient". The decimal expansion of a rational number always either terminates after a finite number of digits or begins to repeat the same finite sequence of digits over and over. Moreover, any repeating or terminating decimal represents a rational number. These statements hold true not just for base 10, but also for any other integer base. A real number that is not rational is called irrational; irrational numbers include √2, π, e, and φ. The decimal expansion of an irrational number continues without repeating. Since the set of rational numbers is countable, and the set of real numbers is uncountable, almost all real numbers are irrational. Rational numbers can be formally defined as equivalence classes of pairs of integers (p, q) such that q ≠ 0, for the equivalence relation defined by (p1, q1) ~ (p2, q2) if p1q2 = p2q1. In abstract algebra, the rational numbers together with certain operations of addition and multiplication form the archetypical field of characteristic zero. As such, it is characterized as having no proper subfield or, alternatively, as being the prime field of characteristic zero. Finite extensions of Q are called algebraic number fields, and the algebraic closure of Q is the field of algebraic numbers. In mathematical analysis, the rational numbers form a dense subset of the real numbers; the real numbers can be constructed from the rational numbers by completion, using Cauchy sequences or Dedekind cuts. The term "rational" in reference to the set Q refers to the fact that a rational number represents a ratio of two integers. In mathematics, "rational" is often used as a noun abbreviating "rational number"; the adjective "rational" sometimes means that the coefficients are rational numbers.
However, a "rational curve" is not a curve defined over the rationals, but a curve which can be parameterized by rational functions. Any integer n can be expressed as the rational number n/1. Two fractions are equal, a/b = c/d, if and only if ad = bc. Where both denominators are positive, a/b < c/d if and only if ad < bc. If either denominator is negative, the fractions must first be converted into equivalent forms with positive denominators, through the equations (−a)/(−b) = a/b and a/(−b) = (−a)/b. Two fractions are added as follows: a/b + c/d = (ad + bc)/(bd).
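The equality and addition rules above can be checked directly against Python's exact-rational type; the particular numerators and denominators used here are arbitrary illustrative values.

```python
from fractions import Fraction

# a/b = c/d iff a*d = b*c, and a/b + c/d = (a*d + b*c)/(b*d).
a, b, c, d = 2, 6, 1, 3
assert (a * d == b * c) == (Fraction(a, b) == Fraction(c, d))
assert Fraction(a, b) + Fraction(c, d) == Fraction(a * d + b * c, b * d)

# Fraction normalizes the sign into the numerator and reduces by the
# gcd, so every rational gets a canonical representative with a
# positive denominator -- one member of its equivalence class.
assert Fraction(2, -4) == Fraction(-1, 2)
assert Fraction(2, -4).denominator == 2
```

Each `Fraction` value is in effect the equivalence class of pairs (p, q) with p1q2 = p2q1 described earlier, represented by its reduced, positive-denominator member.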
Rational number
–
A diagram showing a representation of the equivalence classes of pairs of integers
79.
Real number
–
In mathematics, a real number is a value that represents a quantity along a continuous line. The adjective "real" in this context was introduced in the 17th century by René Descartes. The real numbers include all the rational numbers, such as the integer −5 and the fraction 4/3, and all the irrational numbers, such as √2. Included within the irrationals are the transcendental numbers, such as π. Real numbers can be thought of as points on an infinitely long line called the number line or real line. Any real number can be determined by a possibly infinite decimal representation, such as that of 8.632. The real line can be thought of as a part of the complex plane, and the complex numbers include the real numbers. These descriptions of the real numbers are not sufficiently rigorous by the modern standards of pure mathematics; several rigorous definitions exist, and all of them satisfy the axiomatic definition and are thus equivalent. The statement that there is no subset of the reals with cardinality strictly greater than ℵ0 and strictly smaller than that of the reals is known as the continuum hypothesis. Simple fractions were used by the Egyptians around 1000 BC, and the Vedic Sulba Sutras (c. 600 BC) include early treatments of what are now recognized as irrational quantities. Around 500 BC, the Greek mathematicians led by Pythagoras realized the need for irrational numbers, in particular the irrationality of the square root of 2. Arabic mathematicians merged the concepts of number and magnitude into a more general idea of real numbers. In the 16th century, Simon Stevin created the basis for modern decimal notation; in the 17th century, Descartes introduced the term "real" to describe roots of a polynomial, distinguishing them from "imaginary" ones. In the 18th and 19th centuries, there was much work on irrational and transcendental numbers. Johann Heinrich Lambert gave the first flawed proof that π cannot be rational, and Adrien-Marie Legendre completed the proof. Évariste Galois developed techniques for determining whether a given equation could be solved by radicals, which gave rise to the field of Galois theory.
Charles Hermite first proved that e is transcendental, and Ferdinand von Lindemann showed that π is transcendental. Lindemann's proof was much simplified by Weierstrass, still further by David Hilbert, and has finally been made elementary by Adolf Hurwitz and Paul Gordan. The development of calculus in the 18th century used the entire set of real numbers without having defined them cleanly. The first rigorous definition was given by Georg Cantor in 1871; in 1874, he showed that the set of all real numbers is uncountably infinite but the set of all algebraic numbers is countably infinite. Contrary to widely held beliefs, his first method was not his famous diagonal argument, which he published later. The real number system can be defined axiomatically up to an isomorphism, which is described hereafter. Another possibility is to start from some rigorous axiomatization of Euclidean geometry; from the structuralist point of view all these constructions are on equal footing.
Real number
–
A symbol of the set of real numbers (ℝ)
80.
Continuous function
–
In mathematics, a continuous function is a function for which sufficiently small changes in the input result in arbitrarily small changes in the output. Otherwise, a function is said to be a discontinuous function. A continuous function with a continuous inverse function is called a homeomorphism. Continuity of functions is one of the core concepts of topology. The introductory portion of this article focuses on the special case where the inputs and outputs of functions are real numbers; in addition, the article discusses the definition for the more general case of functions between two metric spaces. In order theory, especially in domain theory, one considers a notion of continuity known as Scott continuity. Other forms of continuity do exist but they are not discussed in this article. As an example, consider the function h(t), which describes the height of a growing flower at time t; this function is continuous. By contrast, if M(t) denotes the amount of money in a bank account at time t, then the function jumps at each point in time when money is deposited or withdrawn, so the function M(t) is discontinuous. A form of the epsilon–delta definition of continuity was first given by Bernard Bolzano in 1817; Cauchy defined continuity in terms of infinitely small quantities, which he defined in terms of variable quantities. The formal definition and the distinction between pointwise continuity and uniform continuity were first given by Bolzano in the 1830s, but the work wasn't published until the 1930s. All three of those nonequivalent definitions of pointwise continuity are still in use. Eduard Heine provided the first published definition of uniform continuity in 1872. Intuitively, a function is continuous if its graph is a single unbroken curve, but this is not a rigorous definition of continuity, since the function f(x) = 1/x is continuous on its whole domain of R ∖ {0} even though its graph has two separate pieces. A function is continuous at a point if it does not have a "hole" or "jump" there; there is a hole or jump in the graph of a function if the value of the function at a point c differs from its limiting value along points that are nearby.
Such a point is called a discontinuity. A function is then continuous if it has no holes or jumps, that is, if it is continuous at every point of its domain; otherwise, a function is discontinuous at the points where its value differs from its limiting value. There are several ways to make this definition mathematically rigorous. These definitions are equivalent to one another, so the most convenient definition can be used to determine whether a given function is continuous or not. In the definitions below, f : I → R is a function defined on a subset I of the set R of real numbers; this subset I is referred to as the domain of f.
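The ε-δ definition can be probed numerically: fix ε, propose a δ, and sample inputs within δ of c to see whether all outputs stay within ε of f(c). This is only finite-sample evidence, not a proof, and the function, point, and δ below are arbitrary choices for this sketch.

```python
def within_epsilon(f, c, delta, eps, samples=10_000):
    """Sample x with |x - c| < delta and check |f(x) - f(c)| < eps.
    A finite sample can refute a proposed delta but never prove one."""
    return all(abs(f(c + t * delta) - f(c)) < eps
               for t in (i / samples for i in range(-samples + 1, samples)))

# For f(x) = x^2 at c = 2 with eps = 0.5, the choice delta = 0.1 works:
# |x - 2| < 0.1 keeps |x^2 - 4| <= 0.41 < 0.5.
assert within_epsilon(lambda x: x * x, c=2.0, delta=0.1, eps=0.5)

# A step function fails at its jump: near c = 0 the output leaps by 1,
# so even a tiny delta cannot keep the change below eps = 0.5.
step = lambda x: 0.0 if x < 0 else 1.0
assert not within_epsilon(step, c=0.0, delta=1e-6, eps=0.5)
```

The failing case is exactly a "jump" in the sense above: the value of the function at c differs from its limiting value from the left, so no δ exists for ε = 0.5.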
Continuous function
–
Illustration of the ε-δ-definition: for ε=0.5, c=2, the value δ=0.5 satisfies the condition of the definition.
81.
Quaternion
–
In mathematics, the quaternions are a number system that extends the complex numbers. They were first described by Irish mathematician William Rowan Hamilton in 1843. A feature of quaternions is that multiplication of two quaternions is noncommutative. Hamilton defined a quaternion as the quotient of two directed lines in a three-dimensional space, or equivalently as the quotient of two vectors. Quaternions are generally represented in the form a + bi + cj + dk, where a, b, c, and d are real numbers, and i, j, and k are the fundamental quaternion units. In practical applications, they can be used alongside other methods, such as Euler angles and rotation matrices, or as an alternative to them. In modern mathematical language, quaternions form a four-dimensional associative normed division algebra over the real numbers; in fact, the quaternions were the first noncommutative division algebra to be discovered. The algebra of quaternions is often denoted by H (for Hamilton), or in blackboard bold by ℍ; it can also be given by the Clifford algebra classifications Cℓ0,2(R) ≅ Cℓ⁰3,0(R). These rings are also Euclidean Hurwitz algebras, of which the quaternions are the largest associative algebra. The unit quaternions can be thought of as a choice of a group structure on the 3-sphere S3 that gives the group Spin(3). Quaternion algebra was introduced by Hamilton in 1843; Carl Friedrich Gauss had also discovered quaternions in 1819, but this work was not published until 1900. Hamilton knew that the complex numbers could be interpreted as points in a plane, and he was looking for a way to do the same for points in three-dimensional space. Points in space can be represented by their coordinates, which are triples of numbers; however, Hamilton had been stuck on the problem of multiplication and division for a long time. He could not figure out how to calculate the quotient of the coordinates of two points in space. The great breakthrough in quaternions finally came on Monday 16 October 1843 in Dublin; as he walked along the towpath of the Royal Canal with his wife, the concepts behind quaternions were taking shape in his mind.
When the answer dawned on him, Hamilton could not resist the urge to carve the formula for the quaternions, i² = j² = k² = ijk = −1, into the stone of Brougham Bridge as he paused on it. On the following day, Hamilton wrote a letter to his friend and fellow mathematician, John T. Graves; this letter was later published in the London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, vol. xxv, pp. 489–95. In the letter, Hamilton states: "And here there dawned on me the notion that we must admit, in some sense, a fourth dimension of space ... An electric circuit seemed to close, and a spark flashed forth." Hamilton called a quadruple with these rules of multiplication a quaternion. Hamilton's treatment is more geometric than the modern approach, which emphasizes quaternions' algebraic properties.
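All products of quaternions follow from the bridge formula i² = j² = k² = ijk = −1 by bilinearity. A small sketch of the resulting multiplication rule, representing a + bi + cj + dk as a 4-tuple (a, b, c, d) (the tuple encoding and function name are choices made here):

```python
# Quaternion (Hamilton) product derived from i^2 = j^2 = k^2 = ijk = -1,
# with p = (a, b, c, d) standing for a + b*i + c*j + d*k.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, i) == (-1, 0, 0, 0)            # i^2 = -1
assert qmul(i, j) == k                        # ij = k
assert qmul(j, i) == (0, 0, 0, -1)            # ji = -k: noncommutative
assert qmul(qmul(i, j), k) == (-1, 0, 0, 0)   # ijk = -1
```

The ij = k while ji = −k check exhibits the noncommutativity highlighted above, the feature that distinguished quaternions from every number system known before them.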
Quaternion
–
Quaternion plaque on Brougham (Broom) Bridge, Dublin, which says: Here as he walked by on the 16th of October 1843 Sir William Rowan Hamilton in a flash of genius discovered the fundamental formula for quaternion multiplication i 2 = j 2 = k 2 = ijk = −1 & cut it on a stone of this bridge
Quaternion
–
Graphical representation of quaternion units product as 90°-rotation in 4D-space, ij = k, ji = − k, ij = − ji
82.
Cardinal number
–
In mathematics, cardinal numbers, or cardinals for short, are a generalization of the natural numbers used to measure the cardinality (size) of sets. The cardinality of a finite set is a natural number: the number of elements in the set. The transfinite cardinal numbers describe the sizes of infinite sets. Cardinality is defined in terms of bijective functions: two sets have the same cardinality if, and only if, there is a bijection between them. In the case of finite sets, this agrees with the intuitive notion of size; in the case of infinite sets, the behavior is more complex. It is even possible for a proper subset of an infinite set to have the same cardinality as the original set. There is a transfinite sequence of cardinal numbers: 0, 1, 2, 3, …, n, …, ℵ0, ℵ1, ℵ2, …, ℵα, …. This sequence starts with the natural numbers including zero (the finite cardinals), which are followed by the aleph numbers; the aleph numbers are indexed by ordinal numbers. Under the assumption of the axiom of choice, this transfinite sequence includes every cardinal number; if one rejects that axiom, the situation is more complicated, with additional infinite cardinals that are not alephs. Cardinality is studied for its own sake as part of set theory. It is also a tool used in branches of mathematics including model theory, combinatorics, abstract algebra, and mathematical analysis. In category theory, the cardinal numbers form a skeleton of the category of sets. The notion of cardinality, as now understood, was formulated by Georg Cantor. Cardinality can be used to compare an aspect of finite sets: e.g. the sets {1, 2, 3} and {4, 5, 6} are not equal, but have the same cardinality, namely three. Cantor applied his concept of bijection to infinite sets, e.g. the set of natural numbers N = {0, 1, 2, 3, …}. Thus, all sets having a bijection with N he called denumerable (countably infinite) sets, and they all have the same cardinal number. This cardinal number is called ℵ0, aleph-null, and he called the cardinal numbers of these infinite sets transfinite cardinal numbers.
Cantor proved that any unbounded subset of N has the same cardinality as N, and he later proved that the set of all real algebraic numbers is also denumerable. His first proof that the set of real numbers is not denumerable used an argument with nested intervals, but in an 1891 paper he proved the result using his ingenious diagonal argument. The new cardinal number of the set of real numbers is called the cardinality of the continuum, and Cantor used the symbol c for it. His continuum hypothesis is the proposition that c is the same as ℵ1. This hypothesis has been found to be independent of the standard axioms of mathematical set theory: it can neither be proved nor disproved from the standard assumptions
Cardinal number
–
A bijective function, f: X → Y, from set X to set Y demonstrates that the sets have the same cardinality, in this case equal to the cardinal number 4.
83.
Aleph number
–
In mathematics, and in particular set theory, the aleph numbers are a sequence of numbers used to represent the cardinality of infinite sets that can be well-ordered. They are named after the symbol used to denote them, the Hebrew letter aleph (ℵ). The cardinality of the natural numbers is ℵ0, the next larger cardinality is aleph-one ℵ1, then ℵ2, and so on. Continuing in this manner, it is possible to define a cardinal number ℵα for every ordinal number α. The concept and notation are due to Georg Cantor, who defined the notion of cardinality and realized that infinite sets can have different cardinalities. The aleph numbers differ from the infinity (∞) commonly found in algebra and calculus: alephs measure the sizes of sets, while infinity is commonly defined as an extreme limit of the real number line, or an extreme point of the extended real number line. ℵ0 is the cardinality of the set of all natural numbers; the set of all finite ordinals, called ω or ω0, has cardinality ℵ0. A set has cardinality ℵ0 if and only if it is countably infinite. Examples of such sets are the set of all square numbers, the set of all cubic numbers, and the set of all fourth powers. The infinite ordinals ω, ω+1, ω·2, ω², and ω^ω, for example, are also countably infinite; the sequence of all positive odd integers followed by all positive even integers, for instance, is an ordering (of order type ω·2) of the set of positive integers. If the axiom of choice holds, then ℵ0 is smaller than any other infinite cardinal. ℵ1 is the cardinality of the set of all countable ordinal numbers, called ω1. This ω1 is itself an ordinal number larger than all countable ones. Therefore, ℵ1 is distinct from ℵ0. The definition of ℵ1 implies that no cardinal number is between ℵ0 and ℵ1. If the axiom of choice is used, it can be proved that the class of cardinal numbers is totally ordered. Using the axiom of choice we can show one of the most useful properties of the set ω1: any countable subset of ω1 has an upper bound in ω1. 
This fact is analogous to the situation in ℵ0: every finite set of natural numbers has a maximum which is also a natural number. ω1 is actually a useful concept, if somewhat exotic-sounding. An example application is closing with respect to countable operations, e.g. trying to explicitly describe the σ-algebra generated by an arbitrary collection of subsets. This is harder than most explicit descriptions of generation in algebra because in those cases we only have to close with respect to finite operations—sums, products, and the like. In popular books ℵ1 is sometimes defined to be 2^ℵ0, the cardinality of the continuum; this identification is correct only if the continuum hypothesis holds
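The claim that a set has cardinality ℵ0 exactly when it is countably infinite can be illustrated with an explicit bijection from the natural numbers onto the integers (a sketch; the function name is an assumption, not from the article):

```python
# Illustrative sketch: the integers have cardinality ℵ0 because an explicit
# bijection from the natural numbers exists, enumerating 0, 1, -1, 2, -2, ...

def nat_to_int(n):
    """Bijection from the natural numbers {0, 1, 2, ...} onto all of Z."""
    return (n + 1) // 2 if n % 2 else -(n // 2)

# The enumeration visits every integer exactly once.
assert [nat_to_int(n) for n in range(7)] == [0, 1, -1, 2, -2, 3, -3]
assert len({nat_to_int(n) for n in range(1000)}) == 1000  # no repeats
```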
Aleph number
–
Aleph-naught, the smallest infinite cardinal number
84.
Function (mathematics)
–
In mathematics, a function is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output. An example is the function that relates each real number x to its square x². The output of a function f corresponding to an input x is denoted by f(x). In this example, if the input is −3, then the output is 9, and we may write f(−3) = 9; likewise, if the input is 3, then the output is also 9, and we may write f(3) = 9. The input variable is sometimes referred to as the argument of the function. Functions of various kinds are the central objects of investigation in most fields of modern mathematics. There are many ways to describe or represent a function. Some functions may be defined by a formula or algorithm that tells how to compute the output for a given input. Others are given by a picture, called the graph of the function. In science, functions are sometimes defined by a table that gives the outputs for selected inputs. A function could also be described implicitly, for example as the inverse to another function or as a solution of a differential equation. Sometimes the codomain is called the function's range, but more commonly the word range is used to mean, instead, specifically the set of outputs. For example, we could define a function using the rule f(x) = x² by saying that the domain and codomain are the real numbers. The image of this function is then the set of non-negative real numbers. In analogy with arithmetic, it is possible to define addition, subtraction, multiplication, and division of functions. Another important operation defined on functions is function composition, where the output from one function becomes the input to another function. In the illustration, linking each shape to its color is a function from X to Y: each shape is linked to a color, there is no shape that lacks a color, and no shape that has more than one color. This function will be referred to as the color-of-the-shape function. The input to a function is called the argument and the output is called the value. 
The set of all permitted inputs to a function is called the domain of the function. Thus, the domain of the color-of-the-shape function is the set of the four shapes. The concept of a function does not require that every possible output is the value of some argument. A second example of a function is the following: the domain is chosen to be the set of natural numbers, and the codomain is the set of integers. The function associates to any natural number n the number 4 − n. For example, to 1 it associates 3 and to 10 it associates −6. A third example of a function has the set of polygons as domain and the set of natural numbers as codomain
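The two numeric examples above can be written directly as code (an illustrative sketch only; the article treats functions abstractly, not as programs):

```python
# Sketch of the two numeric examples: the squaring function and the
# natural-number-to-integer function n -> 4 - n.

def f(x):
    """Squaring function: domain and codomain are the real numbers."""
    return x ** 2

def g(n):
    """Domain: natural numbers; codomain: integers; n maps to 4 - n."""
    return 4 - n

assert f(-3) == 9 and f(3) == 9   # two different inputs may share one output
assert g(1) == 3 and g(10) == -6
```

Note how f illustrates that a function need not be one-to-one: the single output 9 corresponds to two distinct arguments.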
Function (mathematics)
–
A function f takes an input x, and returns a single output f(x). One metaphor describes the function as a "machine" or "black box" that for each input returns a corresponding output.
85.
Abstraction
–
Abstraction is a conceptual process by which general rules and concepts are derived from the usage and classification of specific examples. An abstraction is the product of this process: a concept that acts as a super-categorical noun for all subordinate concepts, and connects any related concepts as a group, field, or category. Conceptual abstractions may be formed by filtering the information content of a concept or an observable phenomenon; in a type–token distinction, a type is more abstract than its tokens. Abstraction in its secondary use is a material process, discussed in the themes below. Its development is likely to have been closely connected with the development of human language. Abstraction involves induction of ideas or the synthesis of particular facts into one general theory about something. It is the opposite of specification, which is the analysis or breaking-down of a general idea or abstraction into concrete facts. Thales believed that everything in the universe comes from one main substance, water. He deduced or specified from a general idea, "everything is water", to the specific forms of water such as ice, snow, fog, and rivers. Modern scientists can also use the opposite approach of abstraction, going from particular facts collected into one general idea. This conceptual scheme emphasizes the inherent equality of both constituent and abstract data, thus avoiding problems arising from the distinction between abstract and concrete. In this sense the process of abstraction entails the identification of similarities between objects, and the process of associating these objects with an abstraction. For example, picture 1 below illustrates the concrete relationship "Cat sits on Mat", while graph 1 below expresses the abstraction "agent sits on location". This conceptual scheme entails no specific hierarchical taxonomy, only a progressive exclusion of detail. Things that do not exist at any particular place and time are often considered abstract. By contrast, instances, or members, of such a thing might exist in many different places and times. 
Those abstract things are said to be multiply instantiated, in the sense of picture 1, picture 2, and so on. It is not sufficient, however, to define abstract ideas as those that can be instantiated. Although the concepts "cat" and "telephone" are abstractions, they are not abstract in the sense of the objects in graph 1 below. Perhaps confusingly, some philosophies refer to tropes as abstract particulars — e.g. the particular redness of a particular apple is an abstract particular. This is similar to qualia and sumbebekos. Karl Marx's writing on the commodity abstraction recognizes a parallel process. The state as both concept and material practice exemplifies the two sides of this process of abstraction. Conceptually, the current concept of the state is an abstraction from the much more concrete early-modern use as the standing or status of the prince, his visible estates. At the same time, materially, the practice of statehood is now constitutively and materially more abstract than at the time when princes ruled as the embodiment of extended power, and that difference accounts for the ontological usefulness of the word abstract
Abstraction
–
Cat on Mat (picture 1)
86.
Ring (mathematics)
–
In mathematics, a ring is one of the fundamental algebraic structures used in abstract algebra. It consists of a set equipped with two binary operations that generalize the arithmetic operations of addition and multiplication. Through this generalization, theorems from arithmetic are extended to non-numerical objects such as polynomials, series, and matrices. The conceptualization of rings started in the 1870s and was completed in the 1920s. Key contributors include Dedekind, Hilbert, Fraenkel, and Noether. Rings were first formalized as a generalization of Dedekind domains that occur in number theory, and of polynomial rings and rings of invariants that occur in algebraic geometry and invariant theory. Afterward, they proved to be useful in other branches of mathematics such as geometry and analysis. A ring is an abelian group with a second binary operation that is associative and is distributive over the abelian group operation. By extension from the integers, the abelian group operation is called addition and the second binary operation is called multiplication. Whether a ring is commutative or not has profound implications on its behavior as an abstract object. As a result, commutative ring theory, commonly known as commutative algebra, is a key topic in ring theory. Its development has been greatly influenced by problems and ideas occurring naturally in algebraic number theory. The most familiar example of a ring is the set of all integers, Z, consisting of the numbers …, −5, −4, −3, −2, −1, 0, 1, 2, 3, 4, 5, …. The familiar properties for addition and multiplication of integers serve as a model for the axioms for rings. A ring is a set R equipped with two binary operations + and · satisfying the following three sets of axioms, called the ring axioms. 1. R is an abelian group under addition, meaning that: (a + b) + c = a + (b + c) for all a, b, c in R; a + b = b + a for all a, b in R; there is an element 0 in R such that a + 0 = a for all a in R; and for each a in R there exists −a in R such that a + (−a) = 0. 2. R is a monoid under multiplication, meaning that: (a · b) · c = a · (b · c) for all a, b, c in R. 
There is an element 1 in R such that a · 1 = a and 1 · a = a for all a in R. 3. Multiplication is distributive with respect to addition: a · (b + c) = (a · b) + (a · c) for all a, b, c in R, and (b + c) · a = (b · a) + (c · a) for all a, b, c in R. As explained in § History below, many authors follow an alternative convention in which a ring is not defined to have a multiplicative identity. This article adopts the convention that, unless otherwise stated, a ring is assumed to have such an identity
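The ring axioms above can be checked by brute force on a small finite example. The sketch below (illustrative only; function and variable names are assumptions) verifies them for the ring of integers modulo 4:

```python
# Sketch: verify the ring axioms exhaustively on the finite ring Z/4Z.

def is_ring(elems, add, mul, zero, one):
    e = list(elems)
    return (
        all(add(add(a, b), c) == add(a, add(b, c)) for a in e for b in e for c in e)
        and all(add(a, b) == add(b, a) for a in e for b in e)          # commutativity of +
        and all(add(zero, a) == a for a in e)                          # additive identity
        and all(any(add(a, b) == zero for b in e) for a in e)          # additive inverses
        and all(mul(mul(a, b), c) == mul(a, mul(b, c)) for a in e for b in e for c in e)
        and all(mul(one, a) == a == mul(a, one) for a in e)            # multiplicative identity
        and all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c)) for a in e for b in e for c in e)
        and all(mul(add(a, b), c) == add(mul(a, c), mul(b, c)) for a in e for b in e for c in e)
    )

n = 4
assert is_ring(range(n), lambda a, b: (a + b) % n, lambda a, b: (a * b) % n, 0, 1)
```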
Ring (mathematics)
–
Richard Dedekind, one of the founders of ring theory.
Ring (mathematics)
–
Chapter IX of David Hilbert 's Die Theorie der algebraischen Zahlkörper. The chapter title is Die Zahlringe des Körpers, literally "the number rings of the field". The word "ring" is the contraction of "Zahlring".
87.
Compass and straightedge constructions
–
The idealized ruler, known as a straightedge, is assumed to be infinite in length, to have no markings on it, and to have only one edge. The compass is assumed to collapse when lifted from the page, so it may not be directly used to transfer distances. More formally, the only permissible constructions are those granted by Euclid's first three postulates. It turns out to be the case that every point constructible using straightedge and compass may also be constructed using compass alone. The ancient Greek mathematicians first conceived compass-and-straightedge constructions, and a number of ancient problems in plane geometry impose this restriction. The ancient Greeks developed many constructions, but in some cases were unable to do so. Gauss showed that some polygons are constructible but that most are not. Some of the most famous straightedge-and-compass problems were proven impossible by Pierre Wantzel in 1837, using the mathematical theory of fields. In spite of existing proofs of impossibility, some persist in trying to solve these problems. In terms of algebra, a length is constructible if and only if it represents a constructible number, and an angle is constructible if and only if its cosine is a constructible number. A number is constructible if and only if it can be written using the four basic arithmetic operations and the extraction of square roots, but of no higher-order roots. Circles can only be drawn starting from two given points: the centre and a point on the circle. The compass may or may not collapse when it is not drawing a circle. The straightedge is infinitely long, but it has no markings on it and has only one straight edge, unlike ordinary rulers. It can only be used to draw a line segment between two points or to extend an existing segment. The modern compass generally does not collapse, and several modern constructions use this feature; it would appear that the modern compass is a more powerful instrument than the ancient collapsing compass. However, by Proposition 2 of Book 1 of Euclid's Elements, no power is lost by using a collapsing compass. Although the proposition is correct, its proofs have a long and checkered history. 
Eyeballing a construction and getting close does not count as a solution: a solution must have a finite number of steps, and not be the limit of ever closer approximations. One of the purposes of Greek mathematics was to find exact constructions for various lengths. The Greeks could not find constructions for these three problems, among others: squaring the circle, drawing a square with the same area as a given circle; doubling the cube, drawing a cube with twice the volume of a given cube; and trisecting the angle, dividing a given angle into three smaller angles all of the same size. For 2000 years people tried to find constructions within the limits set above, and failed. All three have now been proven under mathematical rules to be impossible in general. The ancient Greek mathematicians first attempted compass-and-straightedge constructions, and they discovered how to construct sums, differences, products, ratios, and square roots of given lengths. They could also construct half of a given angle, a square whose area is twice that of another square, a square having the same area as a given polygon
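Wantzel's field-theoretic approach reduces the angle-trisection problem to algebra. Trisecting a 60° angle amounts to constructing cos 20°, which by the triple-angle identity cos 3θ = 4cos³θ − 3cosθ is a root of the cubic 8x³ − 6x − 1; since that cubic is irreducible over the rationals, cos 20° is not a constructible number. A numerical illustration of the identity (not a proof):

```python
import math

# Illustration: cos 20° satisfies 8x^3 - 6x - 1 = 0, the cubic obtained from
# cos 3θ = 4cos³θ - 3cosθ with 3θ = 60° (so cos 3θ = 1/2).
c = math.cos(math.radians(20))
assert abs(8 * c**3 - 6 * c - 1) < 1e-9
```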
Compass and straightedge constructions
–
A compass
Compass and straightedge constructions
–
Creating a regular hexagon with a ruler and compass
88.
Linear algebra
–
Linear algebra is the branch of mathematics concerning vector spaces and linear mappings between such spaces. It includes the study of lines, planes, and subspaces, but is also concerned with properties common to all vector spaces. The set of points with coordinates that satisfy a linear equation forms a hyperplane in an n-dimensional space. The conditions under which a set of n hyperplanes intersect in a single point is an important focus of study in linear algebra. Such an investigation is initially motivated by a system of linear equations containing several unknowns; such equations are naturally represented using the formalism of matrices and vectors. Linear algebra is central to both pure and applied mathematics. For instance, abstract algebra arises by relaxing the axioms of a vector space, leading to a number of generalizations. Functional analysis studies the infinite-dimensional version of the theory of vector spaces. Combined with calculus, linear algebra facilitates the solution of linear systems of differential equations. Because linear algebra is such a well-developed theory, nonlinear mathematical models are sometimes approximated by linear models. The study of linear algebra first emerged from the study of determinants. Determinants were used by Leibniz in 1693, and subsequently, Gabriel Cramer devised Cramer's Rule for solving linear systems in 1750. Later, Gauss further developed the theory of solving linear systems by using Gaussian elimination. The study of matrix algebra first emerged in England in the mid-1800s. In 1844 Hermann Grassmann published his Theory of Extension, which included foundational new topics of what is today called linear algebra. In 1848, James Joseph Sylvester introduced the term matrix, which is Latin for womb. While studying compositions of linear transformations, Arthur Cayley was led to define matrix multiplication and inverses. Crucially, Cayley used a single letter to denote a matrix, thus treating a matrix as an aggregate object. 
In 1882, Hüseyin Tevfik Pasha wrote the book titled Linear Algebra. The first modern and more precise definition of a vector space was introduced by Peano in 1888, and by 1900, a theory of linear transformations of finite-dimensional vector spaces had emerged. Linear algebra took its modern form in the first half of the twentieth century. The use of matrices in quantum mechanics, special relativity, and statistics helped spread the subject beyond pure mathematics. The origin of many of these ideas is discussed in the articles on determinants and Gaussian elimination. Linear algebra first appeared in American graduate textbooks in the 1940s. Following work by the School Mathematics Study Group, U.S. high schools in the 1960s asked 12th grade students to do matrix algebra, formerly reserved for college. In France during the 1960s, educators attempted to teach linear algebra through finite-dimensional vector spaces in the first year of secondary school; this was met with a backlash in the 1980s that removed linear algebra from the curriculum. To better suit 21st century applications, such as data mining and uncertainty analysis, linear algebra can be based upon the singular value decomposition instead of Gaussian elimination
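The Gaussian elimination credited to Gauss above can be sketched in a few lines (an illustrative, minimal version with partial pivoting and no handling of singular systems):

```python
# Minimal Gaussian-elimination sketch for solving a square linear system Ax = b.
# Illustrative only: assumes the system has a unique solution.

def solve(A, b):
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for col in range(n):
        # partial pivoting: bring the largest entry in this column to the top
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):                # eliminate below the pivot
            f = M[r][col] / M[col][col]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    x = [0.0] * n                                  # back substitution
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# x + y = 3, x - y = 1  ->  x = 2, y = 1
assert solve([[1.0, 1.0], [1.0, -1.0]], [3.0, 1.0]) == [2.0, 1.0]
```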
Linear algebra
–
The three-dimensional Euclidean space R³ is a vector space, and lines and planes passing through the origin are vector subspaces in R³.
89.
Vector space
–
A vector space is a collection of objects called vectors, which may be added together and multiplied by numbers, called scalars in this context. Scalars are often taken to be real numbers, but there are also vector spaces with scalar multiplication by complex numbers, rational numbers, or generally any field. The operations of vector addition and scalar multiplication must satisfy certain requirements, called axioms. Euclidean vectors are an example of a vector space. They represent physical quantities such as forces: any two forces can be added to yield a third, and the multiplication of a force vector by a real multiplier is another force vector. In the same vein, but in a more geometric sense, vectors representing displacements in the plane or in three-dimensional space also form vector spaces. Vector spaces are the subject of linear algebra and are well characterized by their dimension, which, roughly speaking, specifies the number of independent directions in the space. Infinite-dimensional vector spaces arise naturally in mathematical analysis, as function spaces, and these vector spaces are generally endowed with additional structure, which may be a topology, allowing the consideration of issues of proximity and continuity. Among these topologies, those that are defined by a norm or inner product are commonly used. This is particularly the case of Banach spaces and Hilbert spaces. Historically, the first ideas leading to vector spaces can be traced back as far as the 17th century's analytic geometry, matrices, systems of linear equations, and Euclidean vectors. Today, vector spaces are applied throughout mathematics, science and engineering. Furthermore, vector spaces furnish an abstract, coordinate-free way of dealing with geometrical and physical objects such as tensors. This in turn allows the examination of local properties of manifolds by linearization techniques. Vector spaces may be generalized in several ways, leading to more advanced notions in geometry and abstract algebra. 
The concept of vector space will first be explained by describing two particular examples. The first example of a vector space consists of arrows in a fixed plane. This is used in physics to describe forces or velocities. Given any two such arrows, v and w, the parallelogram spanned by these two arrows contains one diagonal arrow that starts at the origin, too. This new arrow is called the sum of the two arrows and is denoted v + w. When a is a positive number, av is the arrow with the same direction as v whose length is multiplied by a; when a is negative, av is defined as the arrow pointing in the opposite direction instead. The second example consists of pairs of real numbers x and y. Such a pair is written as (x, y), and the sum of two such pairs and multiplication of a pair with a number are defined as follows: (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2) and a(x, y) = (ax, ay). The first example above reduces to this one if the arrows are represented by the pair of Cartesian coordinates of their end points. A vector space over a field F is a set V together with two operations that satisfy the eight axioms listed below. Elements of V are commonly called vectors, and elements of F are commonly called scalars. The first operation, called vector addition, takes any two vectors v and w and gives a third vector v + w. The second operation, called scalar multiplication, takes any scalar a and any vector v and gives another vector av. In this article, vectors are represented in boldface to distinguish them from scalars
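The pairs-of-numbers example can be sketched directly in code, with a spot-check of two of the axioms (illustrative only; the function names are assumptions):

```python
# Sketch of the pair example: componentwise addition and scalar multiplication
# on pairs of real numbers (R²), with spot-checks of two vector-space axioms.

def add(v, w):
    return (v[0] + w[0], v[1] + w[1])

def scale(a, v):
    return (a * v[0], a * v[1])

v, w = (1.0, 2.0), (3.0, -1.0)
assert add(v, w) == add(w, v)                                 # commutativity
assert scale(2, add(v, w)) == add(scale(2, v), scale(2, w))   # distributivity
assert add(v, scale(2, w)) == (7.0, 0.0)                      # v + 2w, as in the caption below
```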
Vector space
–
Vector addition and scalar multiplication: a vector v (blue) is added to another vector w (red, upper illustration). Below, w is stretched by a factor of 2, yielding the sum v + 2w.
90.
Graph theory
–
In mathematics, graph theory is the study of graphs, which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) which are connected by edges (also called arcs or lines). Graphs are one of the prime objects of study in discrete mathematics. Refer to the glossary of graph theory for basic definitions. The following are some of the more basic ways of defining graphs and related mathematical structures. To avoid ambiguity, this type of graph may be described precisely as undirected and simple. Other senses of graph stem from different conceptions of the edge set. In one more generalized notion, E is a set together with a relation of incidence that associates with each edge two vertices. In another generalized notion, E is a multiset of unordered pairs of vertices; many authors call this type of object a multigraph or pseudograph. All of these variants and others are described more fully below. The vertices belonging to an edge are called the ends or end vertices of the edge; a vertex may exist in a graph and not belong to an edge. V and E are usually taken to be finite, and many of the well-known results are not true for infinite graphs because many of the arguments fail in the infinite case. The order of a graph is |V|, its number of vertices; the size of a graph is |E|, its number of edges. The degree or valency of a vertex is the number of edges that connect to it. For an edge {x, y}, graph theorists usually use the somewhat shorter notation xy. Graphs can be used to model many types of relations and processes in physical, biological, and social systems, and many practical problems can be represented by graphs. Emphasizing their application to real-world systems, the term network is sometimes defined to mean a graph in which attributes are associated with the nodes and/or edges. In computer science, graphs are used to represent networks of communication, data organization, computational devices, the flow of computation, etc. 
For instance, the link structure of a website can be represented by a directed graph, in which the vertices represent web pages and the directed edges represent links from one page to another. A similar approach can be taken to problems in social media, travel, biology, computer chip design, and many other fields. The development of algorithms to handle graphs is therefore of major interest in computer science. The transformation of graphs is often formalized and represented by graph rewrite systems. Graph-theoretic methods, in various forms, have proven particularly useful in linguistics. Traditionally, syntax and compositional semantics follow tree-based structures, whose expressive power lies in the principle of compositionality
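The basic definitions above (vertex set V, edge set E, order, size, degree) can be sketched directly (an illustrative toy graph, not from the article):

```python
# Sketch: an undirected simple graph as a vertex set V and edge set E,
# with order |V|, size |E|, and vertex degree computed directly.

V = {"a", "b", "c", "d"}
E = {("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")}

def degree(v):
    """Number of edges incident to vertex v."""
    return sum(1 for e in E if v in e)

assert len(V) == 4                         # the order of the graph, |V|
assert len(E) == 4                         # the size of the graph, |E|
assert degree("c") == 3 and degree("d") == 1
```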
Graph theory
–
A drawing of a graph.
91.
Order theory
–
Order theory is a branch of mathematics which investigates the intuitive notion of order using binary relations. It provides a formal framework for describing statements such as "this is less than that" or "this precedes that". This article introduces the field and provides basic definitions; a list of order-theoretic terms can be found in the order theory glossary. Orders are everywhere in mathematics and related fields like computer science. The first order often discussed in primary school is the standard order on the natural numbers, e.g. "2 is less than 3", "10 is greater than 5". This intuitive concept can be extended to orders on other sets of numbers, such as the integers and the reals. The idea of being greater than or less than another number is one of the basic intuitions of number systems in general. Other familiar examples of orderings are the alphabetical order of words in a dictionary and the genealogical property of lineal descent within a group of people. The notion of order is very general, extending beyond contexts that have an immediate, intuitive feel of sequence or relative quantity. In other contexts orders may capture notions of containment or specialization. Abstractly, this type of order amounts to the subset relation, e.g. "pediatricians are physicians". Some orders, like the less-than relation on the natural numbers, have the special property that each element can be compared to any other element; however, many other orders do not. Those orders, like the subset-of relation, for which there exist incomparable elements are called partial orders; orders for which every pair of elements is comparable are total orders. Order theory captures the intuition of orders that arises from such examples in a general setting. This is achieved by specifying properties that a relation ≤ must have to be a mathematical order. This more abstract approach makes sense, because one can derive numerous theorems in the general setting. These insights can then be transferred to many less abstract applications. Driven by the wide usage of orders, numerous special kinds of ordered sets have been defined. In addition, order theory does not restrict itself to the various classes of ordering relations. 
A simple example of an order-theoretic property for functions comes from analysis, where monotone functions are frequently found. This section introduces ordered sets by building upon the concepts of set theory, arithmetic, and binary relations. Suppose that P is a set and that ≤ is a relation on P. Then ≤ is a partial order if it is reflexive, antisymmetric, and transitive: for all a, b, and c in P, a ≤ a; if a ≤ b and b ≤ a then a = b; and if a ≤ b and b ≤ c then a ≤ c. A set with a partial order on it is called a partially ordered set, poset, or just an ordered set if the intended meaning is clear. By checking these properties, one sees that the well-known orders on natural numbers, integers, and rational numbers are all partial orders in this sense
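The three defining properties of a partial order can be checked by brute force on a finite example. The sketch below (illustrative; names are assumptions) verifies them for divisibility on the divisors of 60, the poset of the Hasse diagram shown below:

```python
# Sketch: brute-force check that divisibility is a partial order on the
# divisors of 60, and that it is partial rather than total.

divisors = [d for d in range(1, 61) if 60 % d == 0]
leq = lambda a, b: b % a == 0   # a ≤ b means "a divides b"

assert all(leq(a, a) for a in divisors)                            # reflexivity
assert all(not (leq(a, b) and leq(b, a)) or a == b
           for a in divisors for b in divisors)                    # antisymmetry
assert all(not (leq(a, b) and leq(b, c)) or leq(a, c)
           for a in divisors for b in divisors for c in divisors)  # transitivity

# 4 and 6 are incomparable under divisibility, so the order is not total.
assert not leq(4, 6) and not leq(6, 4)
```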
Order theory
–
Hasse diagram of the set of all divisors of 60, partially ordered by divisibility
92.
General relativity
–
General relativity is the geometric theory of gravitation published by Albert Einstein in 1915 and the current description of gravitation in modern physics. General relativity generalizes special relativity and Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present. The relation is specified by the Einstein field equations, a system of partial differential equations. Some predictions of general relativity differ significantly from those of classical physics; examples of such differences include gravitational time dilation, gravitational lensing, and the gravitational redshift of light. The predictions of general relativity have been confirmed in all observations and experiments to date. Although general relativity is not the only relativistic theory of gravity, it is the simplest theory that is consistent with experimental data. Einstein's theory has important astrophysical implications. For example, it implies the existence of black holes—regions of space in which space and time are distorted in such a way that nothing, not even light, can escape—as an end-state for massive stars. The bending of light by gravity can lead to the phenomenon of gravitational lensing, in which multiple images of the same distant astronomical object are visible in the sky. General relativity also predicts the existence of gravitational waves, which have since been observed directly by the physics collaboration LIGO. In addition, general relativity is the basis of current cosmological models of an expanding universe. Soon after publishing the special theory of relativity in 1905, Einstein started thinking about how to incorporate gravity into his new relativistic framework. In 1907, beginning with a simple thought experiment involving an observer in free fall, he embarked on what would be an eight-year search for a relativistic theory of gravity. After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations. These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present. The Einstein field equations are nonlinear and very difficult to solve. 
Einstein used approximation methods in working out initial predictions of the theory, but as early as 1916, the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse. In 1917, Einstein applied his theory to the universe as a whole. In line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations—the cosmological constant—to match that observational presumption. By 1929, however, the work of Hubble and others had shown that our universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which our universe has evolved from an extremely hot and dense earlier state. Einstein later declared the cosmological constant the biggest blunder of his life
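The Schwarzschild solution mentioned above defines a characteristic length, the Schwarzschild radius r_s = 2GM/c². A quick illustrative computation (not from the article; constants are standard rounded values) for the 10-solar-mass black hole of the figure below:

```python
# Illustrative arithmetic: Schwarzschild radius r_s = 2GM/c^2 for a
# 10-solar-mass black hole, using rounded physical constants.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

r_s = 2 * G * 10 * M_sun / c**2
assert 29_000 < r_s < 30_000   # roughly 30 km
```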
General relativity
–
A simulated black hole of 10 solar masses within the Milky Way, seen from a distance of 600 kilometers.
General relativity
–
Albert Einstein developed the theories of special and general relativity. Picture from 1921.
General relativity
–
Einstein cross: four images of the same astronomical object, produced by a gravitational lens
General relativity
–
Artist's impression of the space-borne gravitational wave detector LISA
93.
Topology
–
In mathematics, topology is concerned with the properties of space that are preserved under continuous deformations, such as stretching, crumpling and bending, but not tearing or gluing. This can be studied by considering a collection of subsets, called open sets. Important topological properties include connectedness and compactness. Topology developed as a field of study out of geometry and set theory, through analysis of concepts such as space, dimension, and transformation. Such ideas go back to Gottfried Leibniz, who in the 17th century envisioned the geometria situs. Leonhard Euler's Seven Bridges of Königsberg problem and polyhedron formula are arguably the field's first theorems. The term topology was introduced by Johann Benedict Listing in the 19th century, and by the middle of the 20th century, topology had become a major branch of mathematics. General topology defines the basic notions used in all branches of topology. Algebraic topology tries to measure degrees of connectivity using algebraic constructs such as homology and homotopy groups. Differential topology is the field dealing with differentiable functions on differentiable manifolds. It is closely related to differential geometry, and together they make up the geometric theory of differentiable manifolds. Geometric topology primarily studies manifolds and their embeddings in other manifolds. A particularly active area is low-dimensional topology, which studies manifolds of four or fewer dimensions. This includes knot theory, the study of mathematical knots. Topology, as a well-defined mathematical discipline, originates in the early part of the twentieth century, but some isolated results can be traced back several centuries. Among these are certain questions in geometry investigated by Leonhard Euler; his 1736 paper on the Seven Bridges of Königsberg is regarded as one of the first practical applications of topology. 
On 14 November 1750 Euler wrote to a friend that he had realised the importance of the edges of a polyhedron, and this led to his polyhedron formula, V − E + F = 2. Some authorities regard this analysis as the first theorem of topology, signalling the birth of the field. Further contributions were made by Augustin-Louis Cauchy, Ludwig Schläfli, Johann Benedict Listing, Bernhard Riemann and Enrico Betti. Listing introduced the term Topologie in Vorstudien zur Topologie, written in his native German, in 1847; the term topologist, in the sense of a specialist in topology, was used in 1905 in the magazine Spectator. Their work was corrected, consolidated and greatly extended by Henri Poincaré, who in 1895 published his ground-breaking paper on Analysis Situs, introducing the concepts now known as homotopy and homology, today considered part of algebraic topology. Unifying the work on function spaces of Georg Cantor, Vito Volterra, Cesare Arzelà, Jacques Hadamard, Giulio Ascoli and others, Maurice Fréchet introduced the metric space in 1906. A metric space is now considered a special case of a general topological space. In 1914, Felix Hausdorff coined the term topological space and gave the definition of what is now called a Hausdorff space; the current notion of a topological space is a slight generalization of Hausdorff spaces, given in 1922 by Kazimierz Kuratowski.
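Euler's polyhedron formula V − E + F = 2 mentioned above is easy to verify directly; a minimal sketch using the standard vertex, edge and face counts of the Platonic solids:

```python
# Check Euler's polyhedron formula V - E + F = 2 for a few convex polyhedra.
# The (vertices, edges, faces) counts below are the standard values.
polyhedra = {
    "tetrahedron": (4, 6, 4),
    "cube": (8, 12, 6),
    "octahedron": (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron": (12, 30, 20),
}

for name, (v, e, f) in polyhedra.items():
    assert v - e + f == 2, name
    print(f"{name}: V - E + F = {v} - {e} + {f} = {v - e + f}")
```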
Topology
–
Möbius strips, which have only one surface and one edge, are a kind of object studied in topology.
94.
Differential geometry
–
Differential geometry is a mathematical discipline that uses the techniques of differential calculus, integral calculus, linear algebra and multilinear algebra to study problems in geometry. The theory of plane and space curves and of surfaces in three-dimensional Euclidean space formed the basis for the development of differential geometry during the 18th century; since the late 19th century, differential geometry has grown into a field concerned more generally with geometric structures on differentiable manifolds. Differential geometry is related to differential topology and to the geometric aspects of the theory of differential equations. The differential geometry of surfaces captures many of the key ideas of the subject. Differential geometry arose and developed as a result of, and in connection with, the mathematical analysis of curves and surfaces. Unanswered questions there indicated greater, hidden relationships; initially applied to Euclidean space, further explorations led to non-Euclidean spaces and to metric and topological spaces. Riemannian geometry studies Riemannian manifolds: smooth manifolds with a Riemannian metric, that is, a concept of distance expressed by means of a smooth positive definite symmetric bilinear form defined on the tangent space at each point. Various concepts based on length, such as the arc length of curves and the area of plane regions, have natural analogues in Riemannian geometry. The notion of a directional derivative of a function from multivariable calculus is extended in Riemannian geometry to the notion of a covariant derivative of a tensor. Many concepts and techniques of analysis and differential equations have been generalized to the setting of Riemannian manifolds. A distance-preserving diffeomorphism between Riemannian manifolds is called an isometry. This notion can also be defined locally, i.e. for small neighborhoods of points; any two regular curves are locally isometric.
In higher dimensions, the Riemann curvature tensor is an important pointwise invariant associated with a Riemannian manifold that measures how close it is to being flat. An important class of Riemannian manifolds is the Riemannian symmetric spaces, whose curvature is not necessarily constant. These are the closest analogues to the plane and space considered in Euclidean and non-Euclidean geometry. Pseudo-Riemannian geometry generalizes Riemannian geometry to the case in which the metric tensor need not be positive-definite; a special case of this is a Lorentzian manifold, which is the mathematical basis of Einstein's general relativity theory of gravity. Finsler geometry has the Finsler manifold as its main object of study. This is a differentiable manifold with a Finsler metric, i.e. a Banach norm defined on each tangent space. Riemannian manifolds are special cases of the more general Finsler manifolds. A Finsler structure on a manifold M is a function F : TM → [0, ∞) such that F(x, my) = |m| F(x, y) for all (x, y) in TM and all real numbers m, and F is infinitely differentiable on TM away from the zero section. Symplectic geometry is the study of symplectic manifolds. A symplectic manifold is an almost symplectic manifold for which the symplectic form ω is closed; a diffeomorphism between two symplectic manifolds which preserves the symplectic form is called a symplectomorphism. Non-degenerate skew-symmetric bilinear forms can only exist on even-dimensional vector spaces, so symplectic manifolds necessarily have even dimension. In dimension 2, a symplectic manifold is just a surface endowed with an area form, and a symplectomorphism is an area-preserving diffeomorphism.
Differential geometry
–
A triangle immersed in a saddle-shaped surface (a hyperbolic paraboloid), along with two diverging ultraparallel lines.
95.
Algebraic geometry
–
Algebraic geometry is a branch of mathematics classically studying zeros of multivariate polynomials. Modern algebraic geometry is based on the use of abstract algebraic techniques, mainly from commutative algebra; the fundamental objects of study are algebraic varieties, which are geometric manifestations of solutions of systems of polynomial equations. A point of the plane belongs to an algebraic curve if its coordinates satisfy a given polynomial equation. Basic questions involve the study of points of special interest, such as singular points and inflection points. More advanced questions involve the topology of the curve and relations between curves given by different equations. Algebraic geometry occupies a central place in modern mathematics and has multiple conceptual connections with such diverse fields as complex analysis, topology and number theory. In the 20th century, algebraic geometry split into several subareas. The mainstream of algebraic geometry is devoted to the study of the complex points of algebraic varieties and, more generally, to points with coordinates in an algebraically closed field. The study of the points of a variety with coordinates in the field of rational numbers, or in a number field, became arithmetic geometry. The study of the real points of an algebraic variety is the subject of real algebraic geometry. A large part of singularity theory is devoted to the singularities of algebraic varieties. With the rise of computers, a computational algebraic geometry area has emerged, which lies at the intersection of algebraic geometry and computer algebra. It consists essentially in developing algorithms and software for studying and finding the properties of explicitly given algebraic varieties. In scheme theory, a point of a scheme may be either a usual point or a subvariety. This approach also enables a unification of the language and the tools of classical algebraic geometry, mainly concerned with complex points.
Wiles's proof of the longstanding conjecture called Fermat's Last Theorem is an example of the power of this approach. For instance, the sphere in three-dimensional Euclidean space R³ can be defined as the set of all points (x, y, z) with x² + y² + z² − 1 = 0. A slanted circle in R³ can be defined as the set of all points (x, y, z) which satisfy the two polynomial equations x² + y² + z² − 1 = 0 and x + y + z = 0. First we start with a field k. In classical algebraic geometry, this field was always the complex numbers C. We consider the affine space of dimension n over k, denoted Aⁿ. When one fixes a coordinate system, one may identify Aⁿ with kⁿ. The purpose of not working with kⁿ is to emphasize that one forgets the vector space structure that kⁿ carries; the property of a function being polynomial does not depend on the choice of a coordinate system in Aⁿ. When a coordinate system is chosen, the functions on the affine n-space may be identified with the ring of polynomial functions in n variables over k.
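Membership in a variety such as the slanted circle above is just simultaneous satisfaction of the defining equations; a minimal numerical sketch (the sample points are illustrative choices):

```python
import math

# The slanted circle in R^3 from the text: the common zero set of
#   f1(x, y, z) = x^2 + y^2 + z^2 - 1   and   f2(x, y, z) = x + y + z.
f1 = lambda x, y, z: x**2 + y**2 + z**2 - 1
f2 = lambda x, y, z: x + y + z

def on_variety(p, tol=1e-12):
    # A point belongs to the variety iff it satisfies every defining equation.
    return abs(f1(*p)) < tol and abs(f2(*p)) < tol

p = (1 / math.sqrt(2), -1 / math.sqrt(2), 0.0)  # lies on the sphere and the plane
q = (1.0, 0.0, 0.0)                             # on the sphere, but not the plane

print(on_variety(p), on_variety(q))  # True False
```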
Algebraic geometry
–
This Togliatti surface is an algebraic surface of degree five. The picture represents a portion of its real locus.
96.
Convex optimization
–
Convex minimization, a subfield of optimization, studies the problem of minimizing convex functions over convex sets. The convexity property can make optimization in some sense easier than the general case; for example, the convexity of f makes the powerful tools of convex analysis applicable. With recent improvements in computing and in theory, convex minimization is nearly as straightforward as linear programming. Many optimization problems can be reformulated as convex minimization problems; for example, the problem of maximizing a concave function f can be re-formulated equivalently as the problem of minimizing the function −f, which is convex. The general form of the problem is to find some x* ∈ X such that f(x*) = min {f(x) : x ∈ X}, for some feasible set X ⊂ Rⁿ. The problem is called a convex optimization problem if X is a convex set. The following statements are true about the convex minimization problem: if a local minimum exists, then it is a global minimum; the set of all minima is convex; for a strictly convex function, if the function has a minimum, then the minimum is unique. Standard form is the usual and most intuitive form of describing a convex minimization problem. In practice, the terms linear and affine are often used interchangeably; such constraints can be expressed in the form hᵢ(x) = aᵢᵀx + bᵢ. A convex minimization problem in standard form is thus written as: minimize f(x) over x, subject to gᵢ(x) ≤ 0, i = 1, …, m, and hᵢ(x) = 0, i = 1, …, p. Note that every equality constraint h(x) = 0 can be replaced by a pair of inequality constraints h(x) ≤ 0 and −h(x) ≤ 0; therefore, for theoretical purposes, equality constraints are redundant. Following from this fact, it is easy to understand why hᵢ(x) = 0 has to be affine as opposed to merely convex: if hᵢ is convex, hᵢ(x) ≤ 0 is a convex constraint, but −hᵢ(x) ≤ 0 requires −hᵢ to be convex as well, so the only way for hᵢ(x) = 0 to describe a convex constraint is for hᵢ to be affine. The Lagrangian function for the problem is L(x, λ₀, λ₁, …, λ_m) = λ₀ f(x) + λ₁ g₁(x) + ⋯ + λ_m g_m(x).
If there exists a strictly feasible point, that is, a point z satisfying g₁(z), …, g_m(z) < 0, then strong duality holds (Slater's condition). Dual subgradient methods are subgradient methods applied to a dual problem; the drift-plus-penalty method is similar to the dual subgradient method. Problems with convex level sets can be efficiently minimized, in theory. Yurii Nesterov proved that quasi-convex minimization problems could be solved efficiently; however, such theoretically efficient methods use divergent-series stepsize rules, which were first developed for classical subgradient methods. Solving even close-to-convex but non-convex problems can be computationally intractable: minimizing a unimodal function is intractable, regardless of the smoothness of the function, according to results of Ivanov.
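The key property above, that any local minimum of a convex problem is global, is what makes simple first-order methods reliable; a minimal sketch with gradient descent on a strictly convex quadratic (the function, starting point and step size are illustrative assumptions):

```python
# Gradient descent on the strictly convex function f(x) = (x - 3)^2 + 1.
# For convex problems, any local minimum found this way is the global minimum.
def f(x):
    return (x - 3.0) ** 2 + 1.0

def grad_f(x):
    return 2.0 * (x - 3.0)

x = 0.0      # arbitrary starting point
step = 0.1   # fixed step size, small enough for this gradient
for _ in range(200):
    x -= step * grad_f(x)

print(round(x, 6), round(f(x), 6))  # converges to the unique minimizer x* = 3, f(x*) = 1
```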
97.
Manifold
–
In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point. More precisely, each point of an n-dimensional manifold has a neighbourhood that is homeomorphic to the Euclidean space of dimension n. One-dimensional manifolds include lines and circles, but not figure eights; two-dimensional manifolds are also called surfaces. Although a manifold locally resembles Euclidean space, globally it may not: for example, the surface of the sphere is not a Euclidean space, but in a region it can be charted by means of map projections of the region into the Euclidean plane. When a region appears in two neighbouring charts, the two representations do not coincide exactly, and a transformation is needed to pass from one to the other. Manifolds naturally arise as solution sets of systems of equations and as graphs of functions. One important class of manifolds is the class of differentiable manifolds; this differentiable structure allows calculus to be done on manifolds. A Riemannian metric on a manifold allows distances and angles to be measured. Symplectic manifolds serve as the phase spaces in the Hamiltonian formalism of classical mechanics, while four-dimensional Lorentzian manifolds model spacetime in general relativity. After a line, the circle is the simplest example of a topological manifold. Topology ignores bending, so a small piece of a circle is treated exactly the same as a small piece of a line. Consider, for instance, the top part of the unit circle, x² + y² = 1, where the y-coordinate is positive. Any point of this arc can be described by its x-coordinate. So, projection onto the first coordinate is a continuous and invertible mapping from the arc to the open interval (−1, 1). Such functions, along with the regions they map, are called charts. Similarly, there are charts for the bottom, left, and right parts of the circle. Together, these parts cover the whole circle, and the four charts form an atlas for the circle.
The top and right charts, χtop and χright respectively, overlap in their domains: their intersection lies in the quarter of the circle where both the x- and y-coordinates are positive. Each maps this part into the interval (0, 1), though differently. Let a be any number in (0, 1); then T(a) = χright(χtop⁻¹(a)) = χright(a, √(1 − a²)) = √(1 − a²). Such a function is called a transition map. The top, bottom, left, and right charts show that the circle is a manifold, but charts need not be geometric projections, and the number of charts is a matter of some choice. A pair of slope-based charts, s and t, provides a second atlas for the circle, related on their overlap by t = 1/s. Each chart omits a single point, one for s and one for t, and it can be proved that it is not possible to cover the full circle with a single chart. Viewed using calculus, the transition function T is simply a function between open intervals, which gives a meaning to the statement that T is differentiable.
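The charts and transition map described above can be written out directly; a minimal sketch (function names mirror the chart names in the text):

```python
import math

# Two of the four projection charts on the unit circle, and the transition
# map between them on their overlap (the quarter where x > 0 and y > 0).
def chi_top(x, y):     # top arc (y > 0) -> its x-coordinate in (-1, 1)
    return x

def chi_top_inv(a):    # inverse: recover the point on the top arc
    return (a, math.sqrt(1 - a * a))

def chi_right(x, y):   # right arc (x > 0) -> its y-coordinate in (-1, 1)
    return y

def T(a):              # transition map: equals sqrt(1 - a^2)
    return chi_right(*chi_top_inv(a))

a = 0.6
print(T(a))  # approximately 0.8, since sqrt(1 - 0.36) = 0.8
```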
Manifold
–
The surface of the Earth requires (at least) two charts to include every point. Here the globe is decomposed into charts around the North and South Poles.
Manifold
–
The real projective plane is a two-dimensional manifold that cannot be realized in three dimensions without self-intersection, shown here as Boy's surface.
98.
Topological groups
–
A topological group is a mathematical object with both an algebraic structure and a topological structure. Thus, one may perform algebraic operations because of the group structure, and one may speak of continuity because of the topology. Topological groups, along with continuous group actions, are used to study continuous symmetries, which have many applications, for example in physics. A topological group G is a topological space which is also a group, such that the group operations of product, G × G → G, (x, y) ↦ xy, and of taking inverses, G → G, x ↦ x⁻¹, are continuous. Here G × G is viewed as a topological space with the product topology. Although not part of this definition, many authors require that the topology on G be Hausdorff; the reasons, and some equivalent conditions, are discussed below. In any case, any topological group can be made Hausdorff by taking an appropriate canonical quotient. Note that these axioms are stated in terms of the product and inversion maps. A homomorphism of topological groups means a continuous group homomorphism G → H. An isomorphism of topological groups is a group isomorphism which is also a homeomorphism of the underlying topological spaces. This is stronger than simply requiring a continuous group isomorphism: the inverse must also be continuous. There are examples of topological groups which are isomorphic as ordinary groups but not as topological groups. Indeed, any non-discrete topological group is also a topological group when considered with the discrete topology; the underlying groups are the same, but as topological groups there is no isomorphism. Topological groups, together with their homomorphisms, form a category. Every group can be made into a topological group by considering it with the discrete topology; in this sense, the theory of topological groups subsumes that of ordinary groups. The real numbers R with the usual topology form a topological group under addition. More generally, Euclidean n-space Rⁿ is a topological group under addition. Other examples of topological groups include the circle group S¹. The classical groups are important examples of topological groups.
Another classical group is the orthogonal group O(n), the group of all linear maps from Rⁿ to itself that preserve the length of all vectors. The orthogonal group is compact as a topological space. Much of Euclidean geometry can be viewed as studying the structure of the orthogonal group, or the closely related group O(n) ⋉ Rⁿ of isometries of Rⁿ.
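The defining property of O(n), length preservation, is easy to check numerically; a minimal sketch with a 2×2 rotation matrix (the angle and vector are arbitrary illustrative choices):

```python
import math

# An element of the orthogonal group O(2) -- here a rotation matrix --
# preserves the length of every vector, as described in the text.
def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def apply(m, v):
    return [m[0][0]*v[0] + m[0][1]*v[1], m[1][0]*v[0] + m[1][1]*v[1]]

def length(v):
    return math.hypot(v[0], v[1])

v = [3.0, 4.0]
w = apply(rotation(1.234), v)
print(round(length(v), 6), round(length(w), 6))  # both 5.0
```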
Topological groups
–
Algebraic structure → Group theory Group theory
99.
Set-theoretic topology
–
In mathematics, set-theoretic topology is a subject that combines set theory and general topology. It focuses on topological questions that are independent of Zermelo–Fraenkel set theory (ZFC). In the mathematical field of topology, a Dowker space is a topological space that is T4 but not countably paracompact. Dowker conjectured that there were no Dowker spaces, and the conjecture was not resolved until M. E. Rudin constructed one in 1971. Rudin's counterexample is a very large space and is generally not well-behaved. Zoltán Balogh gave the first ZFC construction of a small example; using PCF theory, M. Kojman and S. Shelah constructed a subspace of Rudin's Dowker space of cardinality ℵ_{ω+1} that is also Dowker. A famous problem is the normal Moore space question, a question in general topology that was the subject of intense research; the answer was eventually proved to be independent of ZFC. Cardinal functions are used in topology as a tool for describing various topological properties. Perhaps the simplest cardinal invariant of a topological space X is its cardinality. The weight w(X) of a topological space X is the smallest possible cardinality of a base for X; when w(X) ≤ ℵ₀ the space X is said to be second countable. The π-weight of a space X is the smallest cardinality of a π-base for X. The character of a topological space X at a point x is the smallest cardinality of a local base at x; the character of the space X is χ(X) = sup {χ(x, X) : x ∈ X}, and when χ(X) ≤ ℵ₀ the space X is said to be first countable. The density d(X) of a space X is the smallest cardinality of a dense subset of X; when d(X) ≤ ℵ₀ the space X is said to be separable. The Lindelöf number L(X) of a space X is the smallest infinite cardinality such that every open cover has a subcover of cardinality no more than L(X); when L(X) = ℵ₀ the space X is said to be a Lindelöf space. The cellularity c(X) of a space X is the supremum of the cardinalities of families of pairwise disjoint non-empty open subsets of X; the hereditary cellularity is the least upper bound of the cellularities of its subsets. The tightness t(X) of a space X is the supremum of the tightness at its points.
When t(X) = ℵ₀ the space X is said to be countably generated or countably tight. Since it is a theorem of ZFC that MA fails for k = c, Martin's axiom is stated as: for every k < c, MA(k) holds. In this case, an antichain is a subset A of P such that any two distinct members of A are incompatible; this differs from, for example, the notion of antichain in the context of trees. MA(2^ℵ₀) is false: the unit interval [0, 1] is a compact Hausdorff space which is separable and so ccc.
Set-theoretic topology
–
The space of integers has cardinality ℵ₀, while the real numbers have cardinality 2^ℵ₀. The topologies of both spaces have cardinality 2^ℵ₀. These are examples of cardinal functions, a topic in set-theoretic topology.
100.
Axiomatic set theory
–
Set theory is a branch of mathematical logic that studies sets, which informally are collections of objects. Although any type of object can be collected into a set, set theory is applied most often to objects that are relevant to mathematics, and the language of set theory can be used in the definitions of nearly all mathematical objects. The modern study of set theory was initiated by Georg Cantor. Set theory is commonly employed as a foundational system for mathematics, particularly in the form of Zermelo–Fraenkel set theory with the axiom of choice. Beyond its foundational role, set theory is a branch of mathematics in its own right; contemporary research includes a diverse collection of topics, ranging from the structure of the real number line to the study of the consistency of large cardinals. Mathematical topics typically emerge and evolve through interactions among many researchers. Set theory, however, was founded by a single paper in 1874 by Georg Cantor: On a Property of the Collection of All Real Algebraic Numbers. Mathematicians had grappled with the notion of infinity since the 5th century BC, beginning with Greek mathematician Zeno of Elea in the West and early Indian mathematicians in the East; especially notable is the work of Bernard Bolzano in the first half of the 19th century. Modern understanding of infinity began in 1867–71, with Cantor's work on number theory. An 1872 meeting between Cantor and Richard Dedekind influenced Cantor's thinking and culminated in Cantor's 1874 paper. Cantor's work initially polarized the mathematicians of his day: while Karl Weierstrass and Dedekind supported Cantor, Leopold Kronecker, now seen as a founder of mathematical constructivism, did not. The utility of set theory led to the article Mengenlehre, contributed in 1898 by Arthur Schoenflies to Klein's encyclopedia. In 1899 Cantor himself posed the question: what is the cardinal number of the set of all sets?
Russell used his paradox as a theme in his 1903 review of continental mathematics in The Principles of Mathematics. In 1906 English readers gained the book Theory of Sets of Points by William Henry Young and his wife Grace Chisholm Young, published by Cambridge University Press. The momentum of set theory was such that debate on the paradoxes did not lead to its abandonment. The work of Zermelo in 1908 and Abraham Fraenkel in 1922 resulted in the set of axioms ZFC, which became the most commonly used set of axioms for set theory. The work of analysts such as Henri Lebesgue demonstrated the great mathematical utility of set theory. Set theory is commonly used as a foundational system, although in some areas category theory is thought to be a preferred foundation. Set theory begins with a fundamental binary relation between an object o and a set A: if o is a member of A, the notation o ∈ A is used. Since sets are objects, the membership relation can relate sets as well. A derived binary relation between two sets is the subset relation, also called set inclusion. If all the members of set A are also members of set B, then A is a subset of B. For example, {1, 2} is a subset of {1, 2, 3}, and so is {2}, but {1, 4} is not. As implied by this definition, a set is a subset of itself; for cases where this possibility is unsuitable, or where it would make sense to reject it, the term proper subset is defined.
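The subset and proper-subset relations just described map directly onto Python's built-in set comparisons; a minimal sketch (the example sets are illustrative):

```python
# Set inclusion as described above, using Python's built-in sets.
A = {1, 2}
B = {1, 2, 3}

print(A <= B)        # True: A is a subset of B
print(A < B)         # True: A is a proper subset (A is a subset and A != B)
print(B <= B)        # True: every set is a subset of itself
print(B < B)         # False: but not a proper subset of itself
print({1, 4} <= B)   # False: 4 is not a member of B
```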
Axiomatic set theory
–
Georg Cantor
Axiomatic set theory
–
A Venn diagram illustrating the intersection of two sets.
101.
Kepler conjecture
–
The Kepler conjecture, named after the 17th-century mathematician and astronomer Johannes Kepler, is a mathematical conjecture about sphere packing in three-dimensional Euclidean space. It says that no arrangement of equally sized spheres filling space has a greater average density than that of the cubic close packing and hexagonal close packing arrangements. The density of these arrangements is around 74.05%. In 1998 Thomas Hales, following an approach suggested by Fejes Tóth, announced that he had a proof of the Kepler conjecture. Hales's proof is a proof by exhaustion involving the checking of many individual cases using complex computer calculations. Referees have said that they are 99% certain of the correctness of Hales's proof. In 2014, the Flyspeck project team, headed by Hales, announced the completion of a formal proof of the Kepler conjecture using a combination of the Isabelle and HOL Light proof assistants. Imagine filling a large container with small equal-sized spheres. The density of the arrangement is equal to the total volume of the spheres divided by the volume of the container. To maximize the number of spheres in the container means to create an arrangement with the highest possible density. Experiment shows that dropping the spheres in randomly will achieve a density of around 65%; however, a higher density can be achieved by carefully arranging the spheres as follows. Start with a layer of spheres in a hexagonal lattice, then put the next layer of spheres in the lowest points you can find above the first layer, and so on. The conjecture was first stated by Johannes Kepler in his paper On the six-cornered snowflake. He had started to study arrangements of spheres as a result of his correspondence with the English mathematician and astronomer Thomas Harriot in 1606. Harriot was a friend and assistant of Sir Walter Raleigh, who had set Harriot the problem of determining how best to stack cannonballs on the decks of his ships.
Harriot published a study of various stacking patterns in 1591. In 1831 Carl Friedrich Gauss proved that the conjecture holds if the spheres are arranged in a regular lattice; this meant that any packing arrangement that disproved the Kepler conjecture would have to be an irregular one. But eliminating all possible irregular arrangements is very difficult, and this is what made the Kepler conjecture so hard to prove. After Gauss, no further progress was made towards proving the Kepler conjecture in the nineteenth century. In 1900 David Hilbert included it in his list of twenty-three unsolved problems of mathematics; it forms part of Hilbert's eighteenth problem. The next step toward a solution was taken by László Fejes Tóth, who showed that the problem of determining the maximum density of all arrangements could be reduced to a finite number of calculations. This meant that a proof by exhaustion was, in principle, possible; as Fejes Tóth realised, a fast enough computer could turn this theoretical result into a practical approach to the problem.
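The close-packing density quoted above, about 74.05%, is exactly π/√18; a one-line check:

```python
import math

# The density of cubic (and hexagonal) close packing is pi / sqrt(18),
# the value the Kepler conjecture asserts is optimal.
density = math.pi / math.sqrt(18)
print(f"{density:.4%}")  # 74.0480%
```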
Kepler conjecture
–
One of the diagrams from Strena Seu de Nive Sexangula, illustrating the Kepler conjecture
Kepler conjecture
–
Diagrams of cubic close packing (left) and hexagonal close packing (right).
102.
Measure theory
–
In mathematical analysis, a measure on a set is a systematic way to assign a number to each suitable subset of that set, intuitively interpreted as its size. In this sense, a measure is a generalization of the concepts of length, area and volume. For instance, the Lebesgue measure of the interval [0, 1] in the real numbers is its length in the everyday sense of the word, specifically 1. Technically, a measure is a function that assigns a non-negative real number or +∞ to certain subsets of a set X. It must further be countably additive: the measure of a subset that can be decomposed into a finite (or countably infinite) number of smaller disjoint subsets is the sum of the measures of the smaller subsets. In general, one cannot associate a consistent size to each subset of a set while also satisfying the other axioms of a measure. This problem was resolved by defining measure only on a sub-collection of all subsets, the so-called measurable subsets, which are required to form a σ-algebra; this means that countable unions, countable intersections and complements of measurable subsets are measurable. Non-measurable sets in a Euclidean space, on which the Lebesgue measure cannot be defined consistently, are complicated in the sense of being badly mixed up with their complement; indeed, their existence is a consequence of the axiom of choice. Measure theory was developed in stages during the late 19th and early 20th centuries by Émile Borel, Henri Lebesgue, Johann Radon and others. The main applications of measures are in the foundations of the Lebesgue integral and in Andrey Kolmogorov's axiomatisation of probability theory. Probability theory considers measures that assign to the whole set the size 1, and considers measurable subsets to be events whose probability is given by the measure. Ergodic theory considers measures that are invariant under, or arise naturally from, a dynamical system. Let X be a set and Σ a σ-algebra over X. A function μ from Σ to the extended real number line is called a measure if it satisfies the following properties. Non-negativity: for all E in Σ, μ(E) ≥ 0.
Null empty set: μ(∅) = 0. Countable additivity: for all countable collections E₁, E₂, E₃, … of pairwise disjoint sets in Σ, μ(E₁ ∪ E₂ ∪ E₃ ∪ …) = μ(E₁) + μ(E₂) + μ(E₃) + …. One may instead require only that at least one set E has finite measure; then the empty set automatically has measure zero because of countable additivity, since μ(E) = μ(E ∪ ∅ ∪ ∅ ∪ …) = μ(E) + μ(∅) + μ(∅) + …, which implies that μ(∅) = 0. The pair (X, Σ) is called a measurable space, and the members of Σ are called measurable sets. If (X, Σ_X) and (Y, Σ_Y) are two measurable spaces, then a function f : X → Y is called measurable if for every Y-measurable set B ∈ Σ_Y, the inverse image f⁻¹(B) is X-measurable, i.e. f⁻¹(B) ∈ Σ_X. A triple (X, Σ, μ) is called a measure space. A probability measure is a measure with total measure one, i.e. μ(X) = 1; a probability space is a measure space with a probability measure.
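The axioms above can be illustrated with the simplest non-trivial example, the counting measure on a finite set; a minimal sketch (the sample subsets are illustrative):

```python
# The counting measure on subsets of a finite set: the "size" of a subset
# is its cardinality. It satisfies non-negativity, mu(empty set) = 0,
# and additivity on pairwise disjoint sets.
def mu(s):
    return len(s)

E1, E2, E3 = {0, 1}, {2}, {3, 4, 5}   # pairwise disjoint subsets
union = E1 | E2 | E3

print(mu(set()))                            # 0
print(mu(union), mu(E1) + mu(E2) + mu(E3))  # 6 6
```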
Measure theory
–
Informally, a measure has the property of being monotone in the sense that if A is a subset of B, the measure of A is less than or equal to the measure of B. Furthermore, the measure of the empty set is required to be 0.
103.
Complex analysis
–
Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers. Since a differentiable function of a complex variable is equal to the sum of its Taylor series, complex analysis is particularly concerned with analytic functions. Complex analysis is one of the classical branches of mathematics, with roots in the 19th century. Important mathematicians associated with complex analysis include Euler, Gauss, Riemann, Cauchy and Weierstrass. Complex analysis, in particular the theory of conformal mappings, has many physical applications and is also used throughout analytic number theory. In modern times, it has become very popular through a new boost from complex dynamics. Another important application of complex analysis is in string theory, which studies conformal invariants in quantum field theory. A complex function is one in which the independent variable and the dependent variable are both complex numbers. More precisely, a complex function is a function whose domain and range are subsets of the complex plane. In other words, the components of the function f(z), u = u(x, y) and v = v(x, y), can be interpreted as real-valued functions of the two real variables x and y. The basic concepts of complex analysis are often introduced by extending the elementary real functions into the complex domain. Holomorphic functions are complex functions, defined on an open subset of the complex plane, that are complex differentiable. In the context of complex analysis, the derivative of f at z₀ is defined to be f′(z₀) = lim_{z → z₀} (f(z) − f(z₀)) / (z − z₀), z ∈ C. Although superficially similar in form to the derivative of a real function, this limit behaves differently: in particular, for the limit to exist, the value of the difference quotient must approach the same complex number, regardless of the manner in which we approach z₀ in the complex plane. Consequently, complex differentiability has much stronger consequences than usual differentiability; for instance, holomorphic functions are infinitely differentiable, whereas most real differentiable functions are not.
For this reason, holomorphic functions are also referred to as analytic functions. Functions that are holomorphic everywhere except at a set of isolated points are known as meromorphic functions. On the other hand, the functions z ↦ ℜ(z), z ↦ |z| and z ↦ z̄ are not holomorphic anywhere on the complex plane. An important property that characterizes holomorphic functions is the relationship between the partial derivatives of their real and imaginary components, known as the Cauchy–Riemann conditions. If f : C → C, defined by f(z) = f(x + iy) = u(x, y) + iv(x, y), is holomorphic on a region, then ∂f/∂z̄ = 0 must hold on that region; here, the differential operator ∂/∂z̄ is defined as (1/2)(∂/∂x + i ∂/∂y). In terms of the real and imaginary parts of the function, u and v, this is equivalent to the pair of equations u_x = v_y and u_y = −v_x, where the subscripts indicate partial differentiation.
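The Cauchy–Riemann equations can be checked numerically with central differences; a minimal sketch comparing f(z) = z², which is holomorphic, with z ↦ z̄, which is not (the sample point and step size h are illustrative assumptions):

```python
# Numerically check the Cauchy-Riemann equations u_x = v_y and u_y = -v_x
# at a sample point, using central finite differences.
h = 1e-6

def partials(f, x, y):
    u = lambda x, y: f(complex(x, y)).real
    v = lambda x, y: f(complex(x, y)).imag
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    vx = (v(x + h, y) - v(x - h, y)) / (2 * h)
    vy = (v(x, y + h) - v(x, y - h)) / (2 * h)
    return ux, uy, vx, vy

def satisfies_cr(f, x, y, tol=1e-6):
    ux, uy, vx, vy = partials(f, x, y)
    return abs(ux - vy) < tol and abs(uy + vx) < tol

print(satisfies_cr(lambda z: z * z, 1.3, -0.7))          # True: z^2 is holomorphic
print(satisfies_cr(lambda z: z.conjugate(), 1.3, -0.7))  # False: conj(z) is not
```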
Complex analysis
–
Plot of the function f(x) = (x² − 1)(x − 2 − i)² / (x² + 2 + 2i). The hue represents the function argument, while the brightness represents the magnitude.
Complex analysis
–
The Mandelbrot set, a fractal.
104.
Differential equation
–
A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities and the derivatives represent their rates of change; because such relations are extremely common, differential equations play a prominent role in many disciplines including engineering, physics, economics and biology. In pure mathematics, differential equations are studied from several different perspectives. Only the simplest differential equations are solvable by explicit formulas; however, if a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. Differential equations first came into existence with the invention of calculus by Newton and Leibniz. Jacob Bernoulli proposed the Bernoulli differential equation in 1695: this is a differential equation of the form y′ + P(x)y = Q(x)yⁿ, for which Leibniz obtained solutions the following year by simplifying it. Historically, the problem of a vibrating string such as that of a musical instrument was studied by Jean le Rond d'Alembert, Leonhard Euler and Daniel Bernoulli. In 1746, d'Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation. The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a particle will fall to a fixed point in a fixed amount of time. Lagrange solved this problem in 1755 and sent the solution to Euler; both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. In 1822 Fourier published his work on heat flow in Théorie analytique de la chaleur; contained in this book was Fourier's proposal of his heat equation for conductive diffusion of heat. This partial differential equation is now taught to every student of mathematical physics. For example, in classical mechanics, the motion of a body is described by its position and velocity as the time varies.
Newton's laws allow one to express these variables dynamically as a differential equation for the unknown position of the body as a function of time. In some cases, this differential equation may be solved explicitly. An example of modelling a real-world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance. The ball's acceleration towards the ground is the acceleration due to gravity minus the deceleration due to air resistance. Gravity is considered constant, and air resistance may be modeled as proportional to the ball's velocity; this means that the ball's acceleration, which is a derivative of its velocity, depends on the velocity. Finding the velocity as a function of time involves solving a differential equation. Differential equations can be divided into several types.
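The falling-ball model above can be sketched numerically. This is a minimal illustration, not a prescribed method: the equation dv/dt = g − kv is integrated with the forward Euler method, and the values of g, k, and the function name are our own illustrative choices.

```python
def falling_ball_velocity(t_end, dt=0.001, g=9.81, k=0.5, v0=0.0):
    """Integrate dv/dt = g - k*v (gravity minus drag proportional to
    velocity) with the forward Euler method; return v(t_end)."""
    v = v0
    for _ in range(int(t_end / dt)):
        v += dt * (g - k * v)   # acceleration depends on current velocity
    return v
```

As t grows, the computed velocity approaches the terminal velocity g/k, where gravity and drag balance and the acceleration vanishes.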
Differential equation
–
Navier–Stokes differential equations used to simulate airflow around an obstruction.
105.
Dynamical system
–
In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in a geometrical space. Examples include the mathematical models that describe the swinging of a clock pendulum and the flow of water in a pipe. At any given time, a dynamical system has a state given by a tuple of real numbers that can be represented by a point in an appropriate state space. The evolution rule of the dynamical system is a function that describes what future states follow from the current state. Often the function is deterministic; that is, for a given time interval only one future state follows from the current state. However, some systems are stochastic, in that random events also affect the evolution of the state variables. In physics, a dynamical system is described as a particle or ensemble of particles whose state varies over time. In order to make a prediction about the system's future behavior, an analytical solution of the evolution equations, or their integration over time through computer simulation, is required. Dynamical systems are a part of chaos theory, logistic map dynamics, bifurcation theory, and the self-assembly process. The concept of a dynamical system has its origins in Newtonian mechanics. To determine the state for all future times requires iterating the relation many times, each iteration advancing time a small step; the iteration procedure is referred to as solving the system or integrating the system. If the system can be solved, then, given an initial point, it is possible to determine all its future positions, a collection of points known as a trajectory or orbit. Before the advent of computers, finding an orbit required sophisticated mathematical techniques; numerical methods implemented on electronic computing machines have simplified the task of determining the orbits of a dynamical system. For simple dynamical systems, knowing the trajectory is often sufficient, but most dynamical systems are too complicated to be understood in terms of individual trajectories. The difficulties arise because the systems studied may only be known approximately: the parameters of the system may not be known precisely, or terms may be missing from the equations. 
The approximations used bring into question the validity or relevance of numerical solutions. To address these questions, several notions of stability have been introduced in the study of dynamical systems, such as Lyapunov stability or structural stability. The stability of the dynamical system implies that there is a class of models or initial conditions for which the trajectories would be equivalent; the operation for comparing orbits to establish their equivalence changes with the different notions of stability. The type of trajectory may be more important than one particular trajectory: some trajectories may be periodic, whereas others may wander through many different states of the system. Applications often require enumerating these classes or maintaining the system within one class. Classifying all possible trajectories has led to the qualitative study of dynamical systems, that is, the study of properties that do not change under coordinate changes.
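The iteration procedure described above, applying the evolution rule repeatedly to produce an orbit, can be sketched in a few lines. The logistic map used as the evolution rule here is a standard illustrative choice, not one named in the text, and the parameter value is ours.

```python
def orbit(rule, x0, n):
    """Return the first n states of the orbit starting at state x0,
    obtained by iterating the evolution rule."""
    states = [x0]
    for _ in range(n - 1):
        states.append(rule(states[-1]))   # each step advances time once
    return states

def logistic(x):
    """Illustrative evolution rule: the logistic map with r = 2.5."""
    return 2.5 * x * (1.0 - x)
```

For this parameter value the orbit settles onto the fixed point x = 1 − 1/r = 0.6, a simple example of determining all future positions from an initial point.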
Dynamical system
–
The Lorenz attractor arises in the study of the Lorenz Oscillator, a dynamical system.
106.
Chaos theory
–
Chaos theory is a branch of mathematics focused on the behavior of dynamical systems that are highly sensitive to initial conditions. This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as: "Chaos: When the present determines the future, but the approximate present does not approximately determine the future." Chaotic behavior exists in many natural systems, such as weather and climate. It also occurs spontaneously in some systems with artificial components, such as road traffic. This behavior can be studied through analysis of a chaotic mathematical model, or through analytical techniques such as recurrence plots and Poincaré maps. Chaos theory has applications in several disciplines, including meteorology, sociology, physics, environmental science, computer science, engineering, economics, biology, and ecology; the theory formed the basis for such fields of study as complex dynamical systems, edge-of-chaos theory, and the self-assembly process. Chaos theory concerns deterministic systems whose behavior can in principle be predicted. Chaotic systems are predictable for a while and then appear to become random; the amount of time for which the behavior can be effectively predicted depends on a time scale set by the dynamics of the system, called the Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days. In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast. This means, in practice, that a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random. In common usage, "chaos" means a state of disorder; however, in chaos theory, the term is defined more precisely. 
Although no universally accepted mathematical definition of chaos exists, a commonly used definition, originally formulated by Robert L. Devaney, requires that a chaotic system be sensitive to initial conditions, be topologically transitive, and have dense periodic orbits. In some cases, while it is often the most practically significant property, sensitivity to initial conditions need not be stated in the definition. If attention is restricted to intervals, the second property implies the other two; an alternative, and in general weaker, definition of chaos uses only the first two properties in the above list. Sensitivity to initial conditions means that each point in a chaotic system is arbitrarily closely approximated by other points with significantly different future paths. Thus, an arbitrarily small change, or perturbation, of the current trajectory may lead to significantly different future behavior. The idea became popularly known as the butterfly effect, after a 1972 talk by Lorenz entitled "Predictability: Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas?" The flapping wing represents a small change in the initial condition of the system, which causes a chain of events leading to large-scale phenomena.
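Sensitivity to initial conditions can be demonstrated directly. The sketch below uses the logistic map at r = 4, a standard chaotic regime (the choice of map and all numerical values are ours, not the text's): two orbits whose initial conditions differ by one part in a billion diverge to an order-one separation within a few dozen steps.

```python
def logistic_orbit(x0, n, r=4.0):
    """Orbit of the logistic map x -> r*x*(1-x), a chaotic system at r=4."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def separations(x0, delta, n):
    """Step-by-step distance between two orbits whose initial
    conditions differ by delta."""
    a = logistic_orbit(x0, n)
    b = logistic_orbit(x0 + delta, n)
    return [abs(p - q) for p, q in zip(a, b)]
```

The initial separation of 1e-9 roughly doubles per step on average, which is the exponential growth of forecast uncertainty described above.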
Chaos theory
–
The Lorenz attractor displays chaotic behavior. These two plots demonstrate sensitive dependence on initial conditions within the region of phase space occupied by the attractor.
Chaos theory
–
A plot of the Lorenz attractor for the parameter values r = 28, σ = 10, b = 8/3
Chaos theory
–
Turbulence in the tip vortex from an airplane wing. Studies of the critical point beyond which a system creates turbulence were important for chaos theory, analyzed for example by the Soviet physicist Lev Landau, who developed the Landau-Hopf theory of turbulence. David Ruelle and Floris Takens later predicted, against Landau, that fluid turbulence could develop through a strange attractor, a main concept of chaos theory.
Chaos theory
–
A conus textile shell, similar in appearance to Rule 30, a cellular automaton with chaotic behaviour.
107.
Professional
–
A professional is a member of a profession or any person who earns their living from a specified professional activity. In addition, most professionals are subject to strict codes of conduct. Professional standards of practice and ethics for a particular field are typically agreed upon and maintained through widely recognized professional associations, such as the IEEE. Some definitions of "professional" limit this term to those professions that serve some important aspect of public interest; in narrow usage, not all expertise is considered a profession. Although sometimes incorrectly referred to as professions, occupations such as skilled construction and maintenance work are more generally thought of as trades; the completion of an apprenticeship is generally associated with skilled labour, or trades such as carpenter, electrician, mason, painter, plumber and other similar occupations. A related distinction would be that a professional does mainly mental work, as opposed to mainly physical work. Although professional training appears to be ideologically neutral, it may be biased towards those with higher class backgrounds and a formal education. Evidence for this claim is both qualitative and quantitative, including examinations, industry statistics and personal accounts of trainees. A key theoretical dispute arises from the observation that established professions are subject to strict codes of conduct; some have thus argued that these codes of conduct, agreed upon and maintained through widely recognized professional associations, are a key element of what constitutes any profession. Thus, as people became more and more specialized in their trade, they began to "profess" their skill to others; with a reputation to uphold, trusted workers of a society who have a specific trade are considered professionals. Ironically, the usage of the word "profess" declined from the late 1800s to the 1950s. See also: Centre for the Study of Professions, Organizational culture, Professional boundaries, Professional sports.
Professional
–
Doctors in many Western countries take the Hippocratic Oath upon entering the profession, as a symbol of their commitment to upholding a number of ethical and moral standards.
108.
Probability theory
–
Probability theory is the branch of mathematics concerned with probability, the analysis of random phenomena. Although it is not possible to predict precisely the results of random events, patterns emerge when sequences of such events are considered in aggregate; two representative mathematical results describing such patterns are the law of large numbers and the central limit theorem. As a mathematical foundation for statistics, probability theory is essential to human activities that involve quantitative analysis of large sets of data. Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state, as in statistical mechanics. A great discovery of twentieth-century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics. Christiaan Huygens published a book on the subject in 1657. Initially, probability theory mainly considered discrete events, and its methods were mainly combinatorial. Eventually, analytical considerations compelled the incorporation of continuous variables into the theory, and this culminated in modern probability theory, on foundations laid by Andrey Nikolaevich Kolmogorov. Kolmogorov combined the notion of sample space, introduced by Richard von Mises, with measure theory, and presented his axiom system for probability theory in 1933. This became the mostly undisputed axiomatic basis for modern probability theory. Most introductions to probability theory treat discrete probability distributions and continuous probability distributions separately; the more mathematically advanced, measure theory-based treatment of probability covers the discrete, the continuous, and any mix of these. Consider an experiment that can produce a number of outcomes. The set of all outcomes is called the sample space of the experiment. The power set of the sample space is formed by considering all different collections of possible results. For example, rolling an honest die produces one of six possible results. One collection of possible results corresponds to getting an odd number. Thus, the subset {1, 3, 5} is an element of the power set of the sample space of die rolls. 
In this case, {1, 3, 5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, that event is said to have occurred. Probability is a way of assigning every event a value between zero and one, with the requirement that the event made up of all possible results be assigned a value of one. The probability that any one of the events {1, 6}, {3}, or {2, 4} will occur is 5/6. This is the same as saying that the probability of the event {1, 2, 3, 4, 6} is 5/6; this event encompasses the possibility of any number except five being rolled. The mutually exclusive event {5} has a probability of 1/6, and the event {1, 2, 3, 4, 5, 6} has a probability of 1, that is, absolute certainty. Discrete probability theory deals with events that occur in countable sample spaces. Modern definition: the modern definition starts with a finite or countable set called the sample space, which relates to the set of all possible outcomes in the classical sense, denoted by Ω.
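The die-roll example above can be made concrete: the sample space has six outcomes, events are subsets of it, and under a fair die the probability of an event is the number of outcomes in the event divided by six. This is only a sketch of the classical discrete case.

```python
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}   # all outcomes of rolling a fair die

def prob(event):
    """Probability of an event (a subset of the sample space) under a
    fair die: |event| / |sample space|, kept exact with Fraction."""
    assert event <= sample_space
    return Fraction(len(event), len(sample_space))

odd = {1, 3, 5}                  # the event "an odd number is rolled"
not_five = sample_space - {5}    # any number except five
```

The values match the text: prob(not_five) is 5/6, prob({5}) is 1/6, and the event containing all outcomes has probability 1.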
Probability theory
–
The normal distribution, a continuous probability distribution.
Probability theory
–
The Poisson distribution, a discrete probability distribution.
109.
Scientific method
–
The scientific method is a body of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge. To be termed scientific, a method of inquiry is commonly based on empirical or measurable evidence subject to specific principles of reasoning; experiments need to be designed to test hypotheses. The most important part of the scientific method is the experiment. The scientific method is a continuous process, which usually begins with observations about the natural world. Human beings are naturally inquisitive, so they often come up with questions about things they see or hear, and develop hypotheses about why things are the way they are. The best hypotheses lead to predictions that can be tested in various ways. In general, the strongest tests of hypotheses come from carefully controlled and replicated experiments that gather empirical data. Depending on how well the tests match the predictions, the hypothesis may require refinement, alteration, expansion, or rejection. If a particular hypothesis becomes very well supported, a general theory may be developed. Although procedures vary from one field of inquiry to another, identifiable features are frequently shared in common between them. The overall process of the scientific method involves making conjectures (hypotheses), deriving predictions from them as logical consequences, and then carrying out experiments based on those predictions. A hypothesis is a conjecture, based on knowledge obtained while formulating the question; the hypothesis might be very specific or it might be broad. Scientists then test hypotheses by conducting experiments; the purpose of an experiment is to determine whether observations agree with or conflict with the predictions derived from a hypothesis. Experiments can take place anywhere from a college lab to CERN's Large Hadron Collider. There are difficulties in a formulaic statement of method, however. Though the scientific method is often presented as a fixed sequence of steps, it is better regarded as a set of general principles. Not all steps take place in every scientific inquiry, nor are they always carried out in the same order. Some philosophers and scientists, such as Lee Smolin, have argued that there is no scientific method. 
Nola and Sankey remark that "for some, the idea of a theory of scientific method is yesteryear's debate".
Scientific method
–
Johannes Kepler (1571–1630). "Kepler shows his keen logical sense in detailing the whole process by which he finally arrived at the true orbit. This is the greatest piece of Retroductive reasoning ever performed." – C. S. Peirce, c. 1896, on Kepler's reasoning through explanatory hypotheses
Scientific method
–
Ibn al-Haytham (Alhazen), 965–1039 Iraq. A polymath, considered by some to be the father of modern scientific methodology, due to his emphasis on experimental data and reproducibility of its results.
Scientific method
–
According to Morris Kline, "Modern science owes its present flourishing state to a new scientific method which was fashioned almost entirely by Galileo Galilei " (1564−1642). Dudley Shapere takes a more measured view of Galileo's contribution.
Scientific method
–
Flying gallop falsified; see image below.
110.
Statistical hypothesis testing
–
A statistical hypothesis, sometimes called confirmatory data analysis, is a hypothesis that is testable on the basis of observing a process that is modeled via a set of random variables. A statistical hypothesis test is a method of statistical inference. Commonly, two statistical data sets are compared, or a data set obtained by sampling is compared against a synthetic data set from an idealized model. Hypothesis tests are used in determining what outcomes of a study would lead to a rejection of the null hypothesis for a pre-specified level of significance. An alternative framework is to select among candidate statistical models; the most common selection techniques are based on either the Akaike information criterion or the Bayes factor. Confirmatory data analysis can be contrasted with exploratory data analysis, which may not have pre-specified hypotheses. Statistical hypothesis testing is a key technique of both frequentist inference and Bayesian inference, although the two types of inference have notable differences. Statistical hypothesis tests define a procedure that controls the probability of incorrectly deciding that a default position (the null hypothesis) is incorrect. The procedure is based on how likely it would be for a set of observations to occur if the null hypothesis were true. Note that this probability of making an incorrect decision is not the probability that the null hypothesis is true. This contrasts with other techniques of decision theory in which the null and alternative hypotheses are treated on a more equal basis. One naïve Bayesian approach to hypothesis testing is to base decisions on the posterior probability; a number of other approaches to reaching a decision based on data are available via decision theory and optimal decisions, some of which have desirable properties. Hypothesis testing, though, is a dominant approach to data analysis in many fields of science. Extensions to the theory of testing include the study of the power of tests, i.e. the probability of correctly rejecting the null hypothesis given that it is false. Such considerations can be used for the purpose of sample size determination prior to the collection of data. In the statistics literature, statistical hypothesis testing plays a fundamental role. 
The usual line of reasoning is as follows. There is an initial research hypothesis of which the truth is unknown. The first step is to state the relevant null and alternative hypotheses; this is important, as mis-stating the hypotheses will muddy the rest of the process. The second step is to consider the statistical assumptions being made about the sample, for example assumptions about statistical independence or about the form of the distributions of the observations; this is equally important, as invalid assumptions will mean that the results of the test are invalid. Next, decide which test is appropriate, and state the relevant test statistic T. Then derive the distribution of the test statistic under the null hypothesis from the assumptions. In standard cases this will be a well-known result; for example, the test statistic might follow a Student's t distribution or a normal distribution.
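The steps above can be sketched for the simplest location test. This is a minimal illustration with invented data, not a full testing procedure: for the null hypothesis that the population mean equals mu0, the test statistic T = (x̄ − mu0) / (s / √n) follows a Student's t distribution with n − 1 degrees of freedom under normality assumptions.

```python
import math
from statistics import mean, stdev

def t_statistic(sample, mu0):
    """One-sample t statistic for H0: population mean equals mu0."""
    n = len(sample)
    # sample mean minus hypothesized mean, scaled by the standard error
    return (mean(sample) - mu0) / (stdev(sample) / math.sqrt(n))
```

A value of T far from zero is evidence against the null hypothesis; how far is "far" is judged against the t distribution at the pre-specified significance level.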
Statistical hypothesis testing
–
A likely originator of the "hybrid" method of hypothesis testing, as well as the use of "nil" null hypotheses, is E.F. Lindquist in his statistics textbook: Lindquist, E.F. (1940) Statistical Analysis In Educational Research. Boston: Houghton Mifflin.
111.
Statistical decision theory
–
Decision theory is the study of the reasoning underlying an agent's choices. Decision theory is an interdisciplinary topic, studied by economists, statisticians, psychologists, and political and social scientists. Empirical applications of this theory are usually done with the help of statistical and econometric methods, especially via the so-called choice models. Estimation of such models is usually done via parametric, semi-parametric and non-parametric maximum likelihood methods. Normative or prescriptive decision theory is concerned with identifying the best decision to take, assuming an ideal, fully rational decision maker; the practical application of this prescriptive approach is called decision analysis, and is aimed at finding tools, methodologies and software to help people make better decisions. In contrast, positive or descriptive decision theory is concerned with describing observed behaviors under the assumption that the agents are behaving under some consistent rules. The prescriptions or predictions about behaviour that positive decision theory produces allow for further tests of the kind of decision-making that occurs in practice; there is a thriving dialogue with experimental economics, which uses laboratory and field experiments to evaluate and inform theory. The area of choice under uncertainty represents the heart of decision theory. In a 1738 paper, Daniel Bernoulli gives an example in which a Dutch merchant is trying to decide whether to insure a cargo being sent from Amsterdam to St Petersburg in winter. In his solution, he defines a utility function and computes expected utility rather than expected financial value. The phrase "decision theory" itself was used in 1950 by E. L. Lehmann. At this time, von Neumann and Morgenstern's theory of expected utility proved that expected utility maximization followed from basic postulates about rational behavior. The work of Maurice Allais and Daniel Ellsberg showed that human behavior has systematic and sometimes important departures from expected-utility maximization. 
The prospect theory of Daniel Kahneman and Amos Tversky renewed the empirical study of economic behavior with less emphasis on rationality presuppositions. Pascal's Wager is a classic example of a choice under uncertainty. Intertemporal choice is concerned with the kind of choice where different actions lead to outcomes that are realised at different points in time: what is the optimal thing to do? The answer depends partly on factors such as the expected rates of interest and inflation, and the person's life expectancy. Some decisions are difficult because of the need to take into account how other people in the situation will respond to the decision that is taken. The analysis of such social decisions is more often treated under the label of game theory, rather than decision theory; from the standpoint of game theory, most of the problems treated in decision theory are one-player games. Other areas of decision theory are concerned with decisions that are difficult simply because of their complexity; one example is the model of economic growth and resource usage developed by the Club of Rome to help politicians make real-life decisions in complex situations. Decisions are also affected by whether options are framed together or separately. One example of a common and incorrect thought process is the gambler's fallacy: believing that a random event is affected by previous random events.
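Bernoulli's merchant example above can be sketched numerically. All figures here (wealth, cargo value, premium, loss probability) are hypothetical; the point is that comparing expected utility of wealth, using the logarithmic utility Bernoulli proposed, can favor insuring even when the premium exceeds the expected monetary loss.

```python
import math

def expected_utility(outcomes):
    """Expected utility of a lottery given as (probability, wealth)
    pairs, with logarithmic utility of wealth."""
    return sum(p * math.log(w) for p, w in outcomes)

# Hypothetical numbers for the merchant's decision.
wealth, cargo, premium, p_loss = 3000.0, 10000.0, 800.0, 0.05

no_insurance = [(1 - p_loss, wealth + cargo), (p_loss, wealth)]
with_insurance = [(1.0, wealth + cargo - premium)]
```

With these numbers the expected monetary value of going uninsured is higher (the premium of 800 exceeds the expected loss of 500), yet the expected log-utility of insuring is higher, illustrating risk aversion.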
Statistical decision theory
–
Daniel Kahneman
112.
Hypothesis testing
–
A statistical hypothesis, sometimes called confirmatory data analysis, is a hypothesis that is testable on the basis of observing a process that is modeled via a set of random variables. A statistical hypothesis test is a method of statistical inference. Commonly, two statistical data sets are compared, or a data set obtained by sampling is compared against a synthetic data set from an idealized model. Hypothesis tests are used in determining what outcomes of a study would lead to a rejection of the null hypothesis for a pre-specified level of significance. An alternative framework is to select among candidate statistical models; the most common selection techniques are based on either the Akaike information criterion or the Bayes factor. Confirmatory data analysis can be contrasted with exploratory data analysis, which may not have pre-specified hypotheses. Statistical hypothesis testing is a key technique of both frequentist inference and Bayesian inference, although the two types of inference have notable differences. Statistical hypothesis tests define a procedure that controls the probability of incorrectly deciding that a default position (the null hypothesis) is incorrect. The procedure is based on how likely it would be for a set of observations to occur if the null hypothesis were true. Note that this probability of making an incorrect decision is not the probability that the null hypothesis is true. This contrasts with other techniques of decision theory in which the null and alternative hypotheses are treated on a more equal basis. One naïve Bayesian approach to hypothesis testing is to base decisions on the posterior probability; a number of other approaches to reaching a decision based on data are available via decision theory and optimal decisions, some of which have desirable properties. Hypothesis testing, though, is a dominant approach to data analysis in many fields of science. Extensions to the theory of testing include the study of the power of tests, i.e. the probability of correctly rejecting the null hypothesis given that it is false. Such considerations can be used for the purpose of sample size determination prior to the collection of data. In the statistics literature, statistical hypothesis testing plays a fundamental role. 
The usual line of reasoning is as follows. There is an initial research hypothesis of which the truth is unknown. The first step is to state the relevant null and alternative hypotheses; this is important, as mis-stating the hypotheses will muddy the rest of the process. The second step is to consider the statistical assumptions being made about the sample, for example assumptions about statistical independence or about the form of the distributions of the observations; this is equally important, as invalid assumptions will mean that the results of the test are invalid. Next, decide which test is appropriate, and state the relevant test statistic T. Then derive the distribution of the test statistic under the null hypothesis from the assumptions. In standard cases this will be a well-known result; for example, the test statistic might follow a Student's t distribution or a normal distribution.
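For the case where the test statistic follows a normal distribution under the null hypothesis, the final step of the reasoning above, converting the statistic into a two-sided p-value, can be sketched directly from the normal CDF. The data and parameter values are illustrative assumptions.

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def z_test_p_value(xbar, mu0, sigma, n):
    """Two-sided p-value for H0: mean = mu0, with known sigma; the
    statistic z is normally distributed under H0."""
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    return 2.0 * (1.0 - normal_cdf(abs(z)))
```

The p-value is then compared against the pre-specified significance level; the familiar threshold |z| ≈ 1.96 corresponds to a two-sided p-value of about 0.05.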
Hypothesis testing
–
A likely originator of the "hybrid" method of hypothesis testing, as well as the use of "nil" null hypotheses, is E.F. Lindquist in his statistics textbook: Lindquist, E.F. (1940) Statistical Analysis In Educational Research. Boston: Houghton Mifflin.
113.
Mathematical statistics
–
Mathematical statistics is the application of mathematics to statistics. Mathematical techniques which are used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure-theoretic probability theory. Statistical science is concerned with the planning of studies, especially with the design of randomized experiments. The initial analysis of the data from properly randomized studies often follows the study protocol; of course, the data from such a study can also be analyzed to consider secondary hypotheses or to suggest new ideas. A secondary analysis of the data from a planned study uses tools from data analysis. Data analysis is divided into descriptive statistics, the part of statistics that describes data, i.e. summarises the data and their typical properties, and inferential statistics, the part that draws conclusions from the data. Mathematical statistics has been inspired by and has extended many options in applied statistics. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures. A probability distribution can either be univariate or multivariate. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution. The multivariate normal distribution is a commonly encountered multivariate distribution. Inferential statistics are used to test hypotheses and make estimations using sample data. Whereas descriptive statistics describe a sample, inferential statistics infer predictions about a larger population that the sample represents. The outcome of statistical inference may be an answer to the question "what should be done next?", where this might be a decision about making further experiments or surveys, or about drawing a conclusion before implementing some organizational or governmental policy. 
For the most part, statistical inference makes propositions about populations, using data drawn from the population of interest. More generally, data about a random process is obtained from its observed behavior during a finite period of time. In statistics, regression analysis is a process for estimating the relationships among variables. It includes many techniques for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables. Less commonly, the focus is on a quantile, or other parameter of the conditional distribution of the dependent variable given the independent variables. In all cases, the estimation target is a function of the independent variables called the regression function. In regression analysis, it is also of interest to characterize the variation of the dependent variable around the regression function, which can be described by a probability distribution. Many techniques for carrying out regression analysis have been developed. Nonparametric regression refers to techniques that allow the regression function to lie in a specified set of functions, which may be infinite-dimensional. Nonparametric statistics are not based on parameterized families of probability distributions; they include both descriptive and inferential statistics. The typical parameters are the mean, variance, etc.
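The simplest instance of estimating a regression function, ordinary least-squares fitting of a line y ≈ a + bx, can be sketched with the closed-form formulas b = cov(x, y)/var(x) and a = ȳ − b·x̄. This is a minimal illustration, not a general regression procedure.

```python
from statistics import mean

def linear_fit(xs, ys):
    """Ordinary least-squares estimates (intercept a, slope b) for the
    simple linear regression y = a + b*x."""
    xbar, ybar = mean(xs), mean(ys)
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    return a, b
```

The residual variation of the observed y values around the fitted line is what the probability distribution mentioned above describes.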
Mathematical statistics
–
Illustration of linear regression on a data set. Regression analysis is an important part of mathematical statistics.
114.
Mathematical optimization
–
In mathematics, computer science and operations research, mathematical optimization, also spelled mathematical optimisation, is the selection of a best element, with regard to some criterion, from some set of available alternatives. The generalization of optimization theory and techniques to other formulations comprises a large area of applied mathematics. Such a formulation is called an optimization problem or a mathematical programming problem. Many real-world and theoretical problems may be modeled in this general framework: an objective function f is to be minimized or maximized over a set A of allowed inputs. Typically, A is some subset of the Euclidean space Rn, often specified by a set of constraints, equalities or inequalities that the members of A have to satisfy. The domain A of f is called the search space or the choice set. The function f is called, variously, an objective function, a loss function or cost function, a utility function or fitness function, or, in certain fields, an energy function. A feasible solution that minimizes the objective function is called an optimal solution. In mathematics, conventional optimization problems are usually stated in terms of minimization. Generally, unless both the objective function and the feasible region are convex in a minimization problem, there may be several local minima. While a local minimum is at least as good as any nearby points, a global minimum is at least as good as every feasible point. In a convex problem, if there is a local minimum that is interior, it is also the global minimum. Optimization problems are often expressed with special notation. Consider the notation min over x ∈ R of (x² + 1). This denotes the minimum value of the objective function x² + 1 when choosing x from the set of real numbers R; the minimum value in this case is 1, occurring at x = 0. Similarly, the notation max over x ∈ R of 2x asks for the maximum value of the objective function 2x, where x may be any real number. In this case there is no such maximum, as the objective function is unbounded. 
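The example above, minimizing x² + 1 over the reals, can be checked numerically. This is only a sketch: a few steps of plain gradient descent converge to the minimizer x = 0, where the minimum value 1 is attained; the step size and starting point are arbitrary choices of ours.

```python
def minimize(f_grad, x0, lr=0.1, steps=200):
    """Plain gradient descent; returns the approximate argmin."""
    x = x0
    for _ in range(steps):
        x -= lr * f_grad(x)   # step downhill along the gradient
    return x

def grad(x):
    """Derivative of the objective function f(x) = x**2 + 1."""
    return 2.0 * x
```

Because x² + 1 is convex, the local minimum found this way is also the global minimum, as the text notes for interior minima of convex problems.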
Mathematical optimization
–
Graph of a paraboloid given by f(x, y) = −(x ² + y ²) + 4. The global maximum at (0, 0, 4) is indicated by a red dot.
115.
Decision science
–
Decision theory is the study of the reasoning underlying an agent's choices. Decision theory is an interdisciplinary topic, studied by economists, statisticians, psychologists, and political and social scientists. Empirical applications of this theory are usually done with the help of statistical and econometric methods, especially via the so-called choice models. Estimation of such models is usually done via parametric, semi-parametric and non-parametric maximum likelihood methods. Normative or prescriptive decision theory is concerned with identifying the best decision to take, assuming an ideal, fully rational decision maker; the practical application of this prescriptive approach is called decision analysis, and is aimed at finding tools, methodologies and software to help people make better decisions. In contrast, positive or descriptive decision theory is concerned with describing observed behaviors under the assumption that the agents are behaving under some consistent rules. The prescriptions or predictions about behaviour that positive decision theory produces allow for further tests of the kind of decision-making that occurs in practice; there is a thriving dialogue with experimental economics, which uses laboratory and field experiments to evaluate and inform theory. The area of choice under uncertainty represents the heart of decision theory. In a 1738 paper, Daniel Bernoulli gives an example in which a Dutch merchant is trying to decide whether to insure a cargo being sent from Amsterdam to St Petersburg in winter. In his solution, he defines a utility function and computes expected utility rather than expected financial value. The phrase "decision theory" itself was used in 1950 by E. L. Lehmann. At this time, von Neumann and Morgenstern's theory of expected utility proved that expected utility maximization followed from basic postulates about rational behavior. The work of Maurice Allais and Daniel Ellsberg showed that human behavior has systematic and sometimes important departures from expected-utility maximization. 
The prospect theory of Daniel Kahneman and Amos Tversky renewed the study of economic behavior with less emphasis on rationality presuppositions. Pascal's Wager is a classic example of a choice under uncertainty. Intertemporal choice is concerned with decisions in which different actions lead to outcomes realised at different points in time: what is the optimal thing to do? The answer depends partly on factors such as the rates of interest and inflation and the person's life expectancy. Some decisions are difficult because of the need to take into account how other people in the situation will respond to the decision that is taken. The analysis of such decisions is more often treated under the label of game theory rather than decision theory; from the standpoint of game theory, most of the problems treated in decision theory are one-player games. Other areas of decision theory are concerned with decisions that are difficult simply because of their complexity; one example is the model of economic growth and resource usage developed by the Club of Rome to help politicians make real-life decisions in complex situations. Decisions are also affected by whether options are framed together or separately. One example of a common and incorrect thought process is the gambler's fallacy: believing that a random event is affected by previous random events.
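Bernoulli's insurance example can be made concrete with a small numerical sketch. All figures below (wealth, cargo value, loss probability, premium) are hypothetical, chosen only to show how a concave logarithmic utility can make insurance worthwhile even when the premium exceeds the expected loss:

```python
import math

def expected_value(outcomes):
    # outcomes: list of (probability, resulting wealth) pairs
    return sum(p * w for p, w in outcomes)

def expected_utility(outcomes, utility=math.log):
    # Bernoulli's resolution: maximise expected *utility* of wealth,
    # here with his logarithmic utility function.
    return sum(p * utility(w) for p, w in outcomes)

# Hypothetical merchant: wealth 1000, cargo worth 8000, a 5% chance the
# winter shipment is lost, and an insurance premium of 600 (more than
# the 400 expected loss).
p_loss = 0.05
uninsured = [(1 - p_loss, 1000 + 8000), (p_loss, 1000)]
insured = [(1.0, 1000 + 8000 - 600)]
```

Expected monetary value favours sailing uninsured (8600 vs 8400), yet the expected log-utility of insuring is higher; the concave utility encodes risk aversion, which is the effect Bernoulli's example demonstrates.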
Decision science
–
Daniel Kahneman
116.
Control theory
–
Control theory is an interdisciplinary branch of engineering and mathematics that deals with the behavior of dynamical systems with inputs, and with how their behavior is modified by feedback. The usual objective of control theory is to control a system, often called the plant, so that its output follows a desired control signal, called the reference. To do this a controller is designed, which monitors the output; the difference between actual and desired output, called the error signal, is applied as feedback to the input of the system, to bring the actual output closer to the reference. Some topics studied in control theory are stability, controllability and observability, and extensive use is made of a diagrammatic style known as the block diagram. Although a major application of control theory is in control systems engineering, the theory reaches far beyond it: as the general theory of feedback systems, control theory is useful wherever feedback occurs. A few examples are in physiology, electronics, climate modeling, machine design, ecosystems, navigation, neural networks, predator–prey interaction, and gene expression. Control systems may be thought of as having four functions: measure, compare, compute and correct. These four functions are completed by five elements: detector, transducer, transmitter, controller, and final control element. The measuring function is completed by the detector, transducer and transmitter; in practical applications these three elements are typically contained in one unit. A standard example of such a measuring unit is a resistance thermometer. Older controller units were mechanical, as in a centrifugal governor or a carburetor. The correct function is completed with a final control element, which changes an input or output of the system so as to affect the manipulated or controlled variable. Fundamentally, there are two types of control loop: open loop control and closed loop control. In open loop control, the action from the controller is independent of the process output.
A good example of this is a central heating boiler controlled only by a timer, so that heat is applied for a constant time regardless of the temperature of the building. In closed loop control, the action from the controller depends on the process output. A closed loop controller therefore has a feedback loop which ensures the controller exerts a control action that drives the process output toward the reference input, or set point.
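The open-loop/closed-loop distinction can be sketched in a few lines. The plant model and all numbers below are invented for illustration: a first-order plant whose true gain (0.5) differs from the nominal gain of 1.0 that the open-loop controller was designed around, so only the feedback controller compensates for the modeling error:

```python
def simulate(controller, steps=4000, dt=0.01, gain=0.5, setpoint=1.0):
    # Plant: dy/dt = -y + gain * u, integrated with forward Euler.
    y = 0.0
    for _ in range(steps):
        u = controller(setpoint, y)    # controller sees reference and output
        y += dt * (-y + gain * u)
    return y

open_loop = lambda r, y: r                 # ignores the measured output
closed_loop = lambda r, y: 20.0 * (r - y)  # proportional action on the error

y_open = simulate(open_loop)       # settles at gain * r = 0.5
y_closed = simulate(closed_loop)   # settles near 10/(1 + 10) of the reference
```

With the gain mismatch, the open-loop output settles at 0.5 while the closed-loop output settles near 10/11 ≈ 0.91; a higher proportional gain would push it closer still to the reference.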
Control theory
–
Centrifugal governor in a Boulton & Watt engine of 1788
117.
Mathematical problem
–
A mathematical problem is a problem that is amenable to being represented, analyzed, and possibly solved with the methods of mathematics. This can be a real-world problem, such as computing the orbits of the planets in the solar system, or a problem of a more abstract nature. It can also be a problem referring to the nature of mathematics itself. Informal real-world mathematical problems are questions posed in a concrete setting, such as "Adam has five apples and gives John three. How many does he have left?" Such questions are usually more difficult to solve than regular mathematical exercises like 5 − 3. Known as word problems, they are used in mathematics education to teach students to connect real-world situations to the abstract language of mathematics. In general, to use mathematics for solving a real-world problem, the first step is to construct a mathematical model of it; this involves abstraction from the details of the problem, and the modeller has to be careful not to lose essential aspects in translating the original problem into a mathematical one. After the problem has been solved in the world of mathematics, the solution must be translated back into the context of the original problem. Abstract mathematical problems arise in all fields of mathematics. While mathematicians usually study them for their own sake, by doing so results may be obtained that find application outside the realm of mathematics; theoretical physics has historically been, and remains, a rich source of inspiration. Also provably unsolvable are so-called undecidable problems, such as the halting problem for Turing machines. Some well-known difficult abstract problems that have been solved relatively recently are the four-colour theorem, Fermat's Last Theorem, and the Poincaré conjecture. Mathematics educators using problem solving for evaluation face an issue phrased by Alan H. Schoenfeld: how can one compare test scores from year to year? The same issue was faced by Sylvestre Lacroix almost two centuries earlier: it is necessary to vary the questions, since students might communicate with each other; though they may fail the exam, they might pass later.
Thus varying the distribution of questions, the range of topics, or the expected answers risks losing the opportunity to compare results with precision. Such degradation of problems into exercises is characteristic of mathematics through history. For example, describing the preparations for the Cambridge Mathematical Tripos in the 19th century, it has been noted that many families of the then-standard problems had originally taxed the abilities of the greatest mathematicians of the 18th century. See also: list of unsolved problems in mathematics, problem solving, mathematical game, list of mathematical concepts named after places.
Mathematical problem
–
'Suppose you walk past a barber's shop one day, and see a sign that says: "Do you shave yourself? If not, please come in and I'll shave you! I shave anyone who does not shave himself, and no one else". So the question is: "Who shaves the barber?"' —the barber paradox
118.
Numerical analysis
–
Numerical analysis is the study of algorithms that use numerical approximation for the problems of mathematical analysis. Being able to compute the sides of a triangle, and hence to compute square roots, is important, for instance, in astronomy, carpentry and construction. Numerical analysis continues this tradition of practical mathematical calculations. Much like the Babylonian approximation of the square root of 2, modern numerical analysis does not seek exact answers. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors. Before the advent of modern computers, numerical methods often depended on hand interpolation in large printed tables. Since the mid-20th century, computers calculate the required functions instead, yet these same interpolation formulas continue to be used as part of the software algorithms for solving differential equations. Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of differential equations. Car companies can improve the safety of their vehicles by using computer simulations of car crashes; such simulations essentially consist of solving differential equations numerically. Hedge funds use tools from all fields of numerical analysis to attempt to calculate the value of stocks. Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and crew assignments; historically, such algorithms were developed within the overlapping field of operations research. Insurance companies use numerical programs for actuarial analysis. The rest of this section outlines several important themes of numerical analysis. The field of numerical analysis predates the invention of modern computers by many centuries; linear interpolation was already in use more than 2000 years ago. To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients.
Such tabulated function values are no longer very useful when a computer is available. The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done.
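The Babylonian value on YBC 7289 can be reproduced with Heron's iterative method, one of the oldest numerical algorithms; this sketch (variable names are mine) shows the rapid convergence that makes printed tables unnecessary once iteration is cheap:

```python
def heron_sqrt(a, guess, iterations):
    # Babylonian/Heron method: repeatedly average the guess with a / guess.
    # Each iteration roughly doubles the number of correct digits.
    x = guess
    for _ in range(iterations):
        x = 0.5 * (x + a / x)
    return x

# The tablet's sexagesimal value 1;24,51,10 for the square root of 2:
babylonian = 1 + 24/60 + 51/60**2 + 10/60**3   # about 1.41421296
modern = heron_sqrt(2, 1.0, 5)
```

Five iterations from a guess of 1 already exhaust double precision, while the tablet's four sexagesimal places correspond to an error of about 6 × 10⁻⁷.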
Numerical analysis
–
Babylonian clay tablet YBC 7289 (c. 1800–1600 BC) with annotations. The approximation of the square root of 2 is given to four sexagesimal figures, which is about six decimal figures: 1 + 24/60 + 51/60² + 10/60³ ≈ 1.41421296...
Numerical analysis
–
Direct method
Numerical analysis
119.
Approximation theory
–
In mathematics, approximation theory is concerned with how functions can best be approximated with simpler functions, and with quantitatively characterizing the errors introduced thereby. What is meant by "best" and "simpler" will depend on the application. A closely related topic is the approximation of functions by generalized Fourier series, that is, approximations based upon summation of a series of terms based upon orthogonal polynomials. This is typically done with polynomial or rational approximations; the objective is to make the approximation as close as possible to the actual function, typically with an accuracy close to that of the underlying computer's floating-point arithmetic. This is accomplished by using a polynomial of higher degree, and/or narrowing the domain over which the polynomial has to approximate the function. Narrowing the domain can often be done through the use of various addition or scaling formulas for the function being approximated; modern mathematical libraries often reduce the domain into many tiny segments and use a low-degree polynomial for each segment. Once the domain and degree of the polynomial are chosen, the polynomial itself is chosen in such a way as to minimize the worst-case error. That is, the goal is to minimize the maximum value of |P(x) − f(x)|, where P is the approximating polynomial and f is the actual function. An Nth-degree polynomial can interpolate N + 1 points on a curve. A polynomial whose error function attains N + 2 extrema of equal magnitude and alternating sign is optimal; it is possible to make contrived functions f for which no such polynomial exists, but these occur rarely in practice. For example, the graphs shown to the right show the error in approximating log and exp for N = 4. The red curves, for the optimal polynomial, are level. Note that, in each case, the number of extrema is N + 2. Two of the extrema are at the end points of the interval. Now suppose P is a polynomial with this property; the red graph to the right shows what its error function might look like for N = 4. Suppose Q is another N-degree polynomial that is a better approximation to f than P.
In particular, at each of the N + 2 values xi where an extremum of P − f occurs, Q is closer to f than P, so (P − f) − (Q − f) takes the sign of P − f there and hence alternates in sign. But (P − f) − (Q − f) reduces to P − Q, which is a polynomial of degree N. This function changes sign at least N + 1 times so, by the intermediate value theorem, it has N + 1 zeroes, which is impossible for a nonzero polynomial of degree N. One can obtain polynomials very close to the optimal one by expanding the given function in terms of Chebyshev polynomials and then cutting off the expansion at the desired degree. This is similar to the Fourier analysis of the function, using the Chebyshev polynomials instead of the trigonometric functions. If one calculates the coefficients in the Chebyshev expansion for a function, f ∼ ∑ᵢ cᵢ Tᵢ, and then cuts off the series after the T N term, one gets an Nth-degree polynomial approximating f. Because the coefficients decrease rapidly for well-behaved functions, the first term after the cutoff dominates all later terms; consequently, if a Chebyshev expansion is cut off after T N, the error will take a form close to a multiple of T N+1.
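The truncated-Chebyshev construction described above can be sketched as follows; the helper functions are my own minimal implementations, and the coefficients are computed by interpolation at the Chebyshev nodes, which is close to (though not exactly) truncating the infinite expansion:

```python
import numpy as np

def cheb_coeffs(f, n):
    # Coefficients c_0..c_n of a degree-n Chebyshev approximation of f
    # on [-1, 1], computed by interpolation at the n+1 Chebyshev nodes.
    k = np.arange(n + 1)
    theta = np.pi * (k + 0.5) / (n + 1)
    fx = f(np.cos(theta))                 # f sampled at the Chebyshev nodes
    c = np.array([2.0 / (n + 1) * np.dot(fx, np.cos(j * theta))
                  for j in range(n + 1)])
    c[0] *= 0.5
    return c

def cheb_eval(c, x):
    # Evaluate sum_j c_j T_j(x) with the Clenshaw recurrence.
    b1 = b2 = 0.0
    for cj in c[:0:-1]:
        b1, b2 = 2.0 * x * b1 - b2 + cj, b1
    return x * b1 - b2 + c[0]

# Degree-4 approximation of exp on [-1, 1]; the worst-case error is
# already within a small factor of the optimal polynomial's.
c = cheb_coeffs(np.exp, 4)
xs = np.linspace(-1.0, 1.0, 201)
max_err = float(np.max(np.abs(np.exp(xs) - cheb_eval(c, xs))))
```

For exp with N = 4 the worst-case error lands below 2 × 10⁻³, consistent with the "error close to a multiple of T₅" picture above.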
Approximation theory
–
Error between optimal polynomial and log(x) (red), and Chebyshev approximation and log(x) (blue) over the interval [2, 4]. Vertical divisions are 10⁻⁵. Maximum error for the optimal polynomial is 6.07 × 10⁻⁵.
120.
Discretization
–
In mathematics, discretization concerns the process of transferring continuous functions, models, and equations into discrete counterparts. This process is usually carried out as a first step toward making them suitable for numerical evaluation and implementation on digital computers. Dichotomization is the special case of discretization in which the number of discrete classes is 2. Discretization is also related to discrete mathematics, and is an important component of granular computing; in this context, discretization may also refer to modification of variable or category granularity. Whenever continuous data is discretized, there is always some amount of discretization error; the goal is to reduce the amount to a level considered negligible for the modeling purposes at hand. Discretization is not always clearly distinguished from quantization: the two terms share a semantic field, and the same is true of discretization error and quantization error. Mathematical methods relating to discretization include the Euler–Maruyama method and the zero-order hold. Discretization is also concerned with the transformation of continuous differential equations into discrete difference equations, suitable for numerical computing. The discretized process noise generally cannot be written in closed form; it can, however, be computed by first constructing a matrix exponential G of a block matrix built from the system matrices, after which the discretized process noise is evaluated by multiplying the transpose of the lower-right partition of G with the upper-right partition of G. To discretise the state response, we assume that the input u is constant during each timestep. Exact discretization may sometimes be intractable due to the heavy matrix exponential and integral operations involved. It is much easier to calculate an approximate discrete model, based on the fact that for small timesteps e^(AT) ≈ I + AT. The approximate solution then becomes x[k+1] ≈ (I + AT) x[k] + TB u[k]. Other possible approximations are e^(AT) ≈ (I − AT)^(−1) and e^(AT) ≈ (I + AT/2)(I − AT/2)^(−1), each of which has different stability properties. The last one is known as the bilinear transform, or Tustin transform.
In statistics and machine learning, discretization refers to the process of converting continuous features or variables to discretized or nominal features; this can be useful when creating probability mass functions. See also: discrete space, time-scale calculus, discrete event simulation, stochastic simulation, finite volume method for unsteady flow, discrete time.
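In the scalar case the exact zero-order-hold discretization and the approximations compared above reduce to one-line formulas. This sketch (pole and timestep chosen arbitrarily) checks that the Tustin approximation of e^(aT) is closer to the exact value than either Euler variant for a stable pole:

```python
import math

def zoh_discretize(a, b, T):
    # Exact zero-order-hold discretization of dx/dt = a*x + b*u (scalar):
    #   x[k+1] = ad * x[k] + bd * u[k]
    ad = math.exp(a * T)
    bd = (ad - 1.0) / a * b          # integral of e^(a*t) * b over one step
    return ad, bd

def euler_forward(a, T):  return 1 + a * T                    # e^(aT) ~ I + AT
def euler_backward(a, T): return 1 / (1 - a * T)              # (I - AT)^-1
def tustin(a, T):         return (1 + a*T/2) / (1 - a*T/2)    # bilinear/Tustin

a, T = -2.0, 0.1
exact = math.exp(a * T)
```

For a = −2 and T = 0.1 the Tustin value matches the exponential to about 5 × 10⁻⁴, an order of magnitude better than either Euler formula, which is why the bilinear transform is the usual default.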
Discretization
–
A solution to a discretized partial differential equation, obtained with the finite element method.
121.
Mathematical physics
–
Mathematical physics refers to the development of mathematical methods for application to problems in physics. It is a branch of applied mathematics, but deals specifically with physical problems. There are several distinct branches of mathematical physics, and these roughly correspond to particular historical periods. The first is the rigorous, abstract and advanced reformulation of Newtonian mechanics in terms of Lagrangian and Hamiltonian mechanics; both formulations are embodied in analytical mechanics. These approaches and ideas can be, and in fact have been, extended to other areas of physics such as statistical mechanics, continuum mechanics, and classical field theory. Moreover, they have provided several examples and basic ideas in differential geometry. The theory of partial differential equations is perhaps most closely associated with mathematical physics; these equations were developed intensively from the second half of the eighteenth century until the 1930s. Physical applications of these developments include hydrodynamics, celestial mechanics, continuum mechanics, elasticity theory, acoustics, thermodynamics, electricity, magnetism, and aerodynamics. The theory of atomic spectra developed almost concurrently with the mathematical fields of linear algebra and the spectral theory of operators. Nonrelativistic quantum mechanics includes Schrödinger operators, and it has connections to atomic and molecular physics; quantum information theory is another subspecialty. The special and general theories of relativity require a rather different type of mathematics. An important ingredient was group theory, which played a significant role in both quantum field theory and differential geometry. This was, however, gradually supplemented by topology and functional analysis in the mathematical description of cosmological as well as quantum field theory phenomena; in this area both homological algebra and category theory are important nowadays. Statistical mechanics forms a separate field, which includes the theory of phase transitions. It relies upon Hamiltonian mechanics and is related to the more mathematical ergodic theory.
There are increasing interactions between combinatorics and physics, in particular statistical physics. The usage of the term "mathematical physics" is sometimes idiosyncratic: certain parts of mathematics that arose from the development of physics are not, in fact, considered parts of mathematical physics. The term is sometimes used to denote research aimed at studying and solving problems inspired by physics or thought experiments within a mathematically rigorous framework.
Mathematical physics
–
An example of mathematical physics: solutions of Schrödinger's equation for quantum harmonic oscillators (left) with their amplitudes (right).
122.
Mathematical biology
–
Mathematical and theoretical biology is an interdisciplinary scientific research field with a range of applications. The field is sometimes called mathematical biology or biomathematics to stress the mathematical side. Mathematical biology aims at the mathematical representation, treatment and modeling of biological processes, using techniques and tools of applied mathematics. It has both theoretical and practical applications in biological, biomedical and biotechnology research. Describing systems in a quantitative manner means their behavior can be better simulated, and hence properties can be predicted that might not be evident to the experimenter. Mathematical biology employs many components of mathematics, and has contributed to the development of new techniques. Applying mathematics to biology has a long history; one founding text is considered to be On Growth and Form by D'Arcy Thompson, but only recently has there been an explosion of interest in the field. Ecology and evolutionary biology have traditionally been the dominant fields of mathematical biology, and evolutionary biology has been the subject of extensive mathematical theorizing. The traditional approach in this area, which includes complications from genetics, is population genetics. When infinitesimal effects at a large number of gene loci are considered, together with the assumption of linkage equilibrium or quasi-linkage equilibrium, one derives quantitative genetics. Ronald Fisher made fundamental advances in statistics, such as analysis of variance, via his work on quantitative genetics. Another important branch of population genetics that led to the extensive development of coalescent theory is phylogenetics. Many population genetics models assume that population sizes are constant; variable population sizes, often in the absence of genetic variation, are treated by the field of population dynamics.
The Lotka–Volterra predator-prey equations are another famous example. Population dynamics overlaps with another active area of research in mathematical biology: mathematical epidemiology, the study of infectious disease affecting populations. Various models of the spread of infections have been proposed and analyzed. In evolutionary game theory, developed first by John Maynard Smith and George R. Price, selection acts directly on inherited phenotypes, without genetic complications; this approach has been refined to produce the field of adaptive dynamics. Modeling cell and molecular biology is an area that has received a boost due to the growing importance of molecular biology. It was introduced by Anthony Bartholomay, and its applications were developed in mathematical biology. A model of a biological system is converted into a system of equations, although the word "model" is often used synonymously with the system of corresponding equations. The solution of the equations, by either analytical or numerical means, describes how the biological system behaves. There are many different types of equations, and the type of behavior that can occur is dependent on both the model and the equations used. The model often makes assumptions about the system; the equations may also make assumptions about the nature of what may occur. Significant recent improvements in computer performance have accelerated the simulation of such models.
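The Lotka–Volterra equations mentioned above make a compact worked example of converting a biological model into a system of equations and solving it numerically; the parameter values below are arbitrary illustrations:

```python
import numpy as np

def lotka_volterra(x0, y0, alpha, beta, delta, gamma, dt, steps):
    # Prey:     dx/dt = alpha*x - beta*x*y
    # Predator: dy/dt = delta*x*y - gamma*y
    # Integrated with the classical fourth-order Runge-Kutta scheme.
    def f(s):
        x, y = s
        return np.array([alpha*x - beta*x*y, delta*x*y - gamma*y])
    traj = np.empty((steps + 1, 2))
    traj[0] = (x0, y0)
    for k in range(steps):
        s = traj[k]
        k1 = f(s)
        k2 = f(s + 0.5*dt*k1)
        k3 = f(s + 0.5*dt*k2)
        k4 = f(s + dt*k3)
        traj[k+1] = s + dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)
    return traj

# Arbitrary illustrative parameters; both populations cycle without
# either going extinct.
traj = lotka_volterra(10, 5, alpha=1.0, beta=0.1, delta=0.075,
                      gamma=1.5, dt=0.01, steps=1000)
```

The model has a conserved quantity, V = δx − γ ln x + βy − α ln y, which a sufficiently small timestep keeps nearly constant; checking it is a convenient sanity test on the integration.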
Mathematical biology
–
Contents
123.
Nobel Prize
–
The Nobel Prize is a set of annual international awards bestowed in a number of categories by Swedish and Norwegian institutions in recognition of academic, cultural, or scientific advances. The will of the Swedish inventor Alfred Nobel established the prizes in 1895; the prizes in Chemistry, Literature, Peace, Physics, and Physiology or Medicine were first awarded in 1901. Medals made before 1980 were struck in 23-carat gold. Between 1901 and 2016, the Nobel Prizes and the Prize in Economic Sciences were awarded 579 times to 911 people and organisations; because some received the Nobel Prize more than once, the number of distinct laureates is smaller, and 23 of the recipients were organisations. The prize ceremonies take place annually in Stockholm, Sweden. Each recipient, or laureate, receives a medal, a diploma, and a monetary award. The Nobel Prize is widely regarded as the most prestigious award available in the fields of literature, medicine, physics, chemistry, peace, and economics. The prize is not awarded posthumously; however, if a person is awarded a prize and dies before receiving it, the prize may still be presented. Though the average number of laureates per prize increased substantially during the 20th century, a prize may not be shared among more than three people. Alfred Nobel was born on 21 October 1833 in Stockholm, Sweden; he was a chemist, engineer, and inventor. In 1894, Nobel purchased the Bofors iron and steel mill, which he made into a major armaments manufacturer. Nobel also invented ballistite; this invention was a precursor to many smokeless military explosives, especially the British smokeless powder cordite. As a consequence of his patent claims, Nobel was eventually involved in a patent infringement lawsuit over cordite. Nobel amassed a fortune during his lifetime, with most of his wealth coming from his 355 inventions, of which dynamite is the most famous. In 1888, Nobel was astonished to read his own obituary, titled "The merchant of death is dead"; it was in fact Alfred's brother Ludvig who had died, and the obituary was eight years premature.
The article disconcerted Nobel and made him apprehensive about how he would be remembered, and this inspired him to change his will. On 10 December 1896, Alfred Nobel died in his villa in San Remo, Italy. Nobel wrote several wills during his lifetime; he composed the last over a year before he died, signing it at the Swedish–Norwegian Club in Paris on 27 November 1895. Nobel bequeathed 94% of his total assets, 31 million SEK, to establish the five Nobel Prizes. Because of skepticism surrounding the will, it was not until 26 April 1897 that it was approved by the Storting in Norway. The executors of Nobel's will, Ragnar Sohlman and Rudolf Lilljequist, formed the Nobel Foundation to take care of Nobel's fortune. Nobel's instructions named a Norwegian Nobel Committee to award the Peace Prize, the members of whom were appointed shortly after the will was approved in April 1897. Soon thereafter, the other prize-awarding organisations were designated or established: Karolinska Institutet on 7 June, the Swedish Academy on 9 June, and the Royal Swedish Academy of Sciences on 11 June. The Nobel Foundation then reached an agreement on guidelines for how the prizes should be awarded in 1900. In 1905, the personal union between Sweden and Norway was dissolved.
Nobel Prize
–
Alfred Nobel had the unpleasant surprise of reading his own obituary, which was titled The merchant of death is dead, in a French newspaper.
Nobel Prize
–
The Nobel Prize
Nobel Prize
–
Alfred Nobel's will stated that 94% of his total assets should be used to establish the Nobel Prizes.
Nobel Prize
–
Wilhelm Röntgen received the first Physics Prize for his discovery of X-rays.
124.
Lists of mathematics topics
–
This article itemizes the various lists of mathematics topics. Some of these lists link to hundreds of articles; some link only to a few. The template to the right links to alphabetical lists of all mathematical articles; this article brings together the same content, organized in a way better suited for browsing. The purpose of this list is not similar to that of the Mathematics Subject Classification formulated by the American Mathematical Society. Many mathematics journals ask authors of research papers and expository articles to list subject codes from the Mathematics Subject Classification in their papers; the subject codes so listed are used by the two major reviewing databases, Mathematical Reviews and Zentralblatt MATH. These lists include topics typically taught in secondary education or in the first year of university. As a rough guide, this list is divided into pure and applied sections, although in reality these branches overlap. Algebra includes the study of algebraic structures, which are sets and operations defined on these sets satisfying certain axioms; the field of algebra is further divided according to which structure is studied. Geometry is initially the study of figures like circles and cubes. Topology developed from geometry; it looks at those properties that do not change even when the figures are deformed by stretching and bending. See also: outline of combinatorics, list of graph theory topics, glossary of graph theory. Logic is the foundation which underlies mathematical logic and the rest of mathematics; it tries to formalize valid reasoning and, in particular, attempts to define what constitutes a proof. One of the central concepts in number theory is that of the prime number. In a dynamical system, a fixed rule describes the time dependence of a point in a geometrical space.
The mathematical models used to describe the swinging of a clock pendulum are examples of dynamical systems. Historically, information theory was developed to find fundamental limits on compressing and reliably communicating data. Signal processing is the analysis, interpretation, and manipulation of signals; signals of interest include sound, images, biological signals such as ECG, radar signals, and many others. Processing of such signals includes filtering, storage and reconstruction, separation of information from noise, and compression. The related field of mathematical statistics develops statistical theory with mathematics. Statistics, the science concerned with collecting and analyzing data, is an autonomous discipline; it has applications in a variety of fields, including economics, evolutionary biology, political science, social psychology and military strategy. The following pages list the integrals of many different functions.
Lists of mathematics topics
–
Ray tracing is a process based on computational mathematics.
Lists of mathematics topics
–
Fourier series approximation of a square wave in five steps.
125.
National Museum of Mathematics
–
The National Museum of Mathematics, or MoMath, is a museum dedicated to mathematics in Manhattan, New York City. It opened on December 15, 2012. It is located at 11 East 26th Street between Fifth and Madison Avenues, across from Madison Square Park in the NoMad neighborhood, and is the only museum dedicated to mathematics in North America. The mission of the museum is to enhance public understanding and perception of mathematics. In 2006 the Goudreau Museum on Long Island, at the time the only museum in the United States dedicated to mathematics, closed. In response, a group led by founder and current MoMath president Glen Whitney set out to create a new museum; they received a charter from the New York State Department of Education in 2009 and raised over 22 million dollars in under four years. With this funding, a 19,000-square-foot space was leased in the Goddard Building at 11-13 East 26th Street. Math Midway is a traveling exhibition of math-based interactive displays. After making its debut at the World Science Festival in 2009, Math Midway traveled the country; the Midway's schedule included stops in New York, Pennsylvania, Texas, California, New Jersey, Ohio, Maryland, Florida, Indiana, and Oregon. In 2016, the Math Midway exhibit was sold to the Science Centre Singapore. Math Midway 2 Go (MM2GO) is a spinoff of Math Midway that includes six of the most popular Math Midway exhibits; MM2GO began traveling to science festivals, schools, community centers, and libraries in the autumn of 2012. Math Encounters is a speaker series presented by the museum. The lectures initially took place at Baruch College in Manhattan on the first Wednesday of each month, and every month a different mathematician is invited to deliver a lecture. Lecturers have included Google's Director of Research Peter Norvig and journalist Paul Hoffman; examples of topics are "The Geometry of Origami", "The Patterns of Juggling", and "Mathematical Morsels from The Simpsons and Futurama".
The lectures are meant to be accessible and engaging for high school students and adults. The first lecture occurred on March 3, 2011, and twenty unique lectures had been delivered as of December 2012. Family Fridays began in April 2014 and occurs once a month. MoMath and Time Warner Cable launched the initiative to provide free mathematical opportunities to low-income families in the form of an event series with new activities; in 2017, the sponsorship was taken over by Two Sigma. Each sculpture can be disassembled into small interlocking pieces, eventually revealing a piece of jewelry or other surprise.
National Museum of Mathematics
–
Entrance (2013)
126.
Relationship between mathematics and physics
–
The relationship between mathematics and physics has been a subject of study of philosophers, mathematicians and physicists since antiquity, and more recently also of historians and educators. In his work Physics, one of the topics treated by Aristotle is how the study carried out by mathematicians differs from that carried out by physicists. Before giving a mathematical proof for the formula for the volume of a sphere, Archimedes used physical reasoning to discover the solution. From the seventeenth century, many of the most important advances in mathematics appeared motivated by the study of physics, and this continued in the following centuries. During this period there was little distinction between physics and mathematics; as an example, Newton regarded geometry as a branch of mechanics. As time progressed, increasingly sophisticated mathematics started to be used in physics, and the current situation is that the mathematical knowledge used in physics is becoming increasingly sophisticated. "How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality?" —Albert Einstein, in Geometry and Experience. It is not easy to clearly delineate mathematics and physics: for some results or discoveries, it is difficult to say to which area they belong, to mathematics or to physics. Among the open questions: What is the geometry of physical space? What is the origin of the axioms of mathematics? How does the already existing mathematics influence the creation and development of physical theories? Is arithmetic a priori or synthetic? What is essentially different between doing an experiment to see the result and making a mathematical calculation to see the result? Do Gödel's incompleteness theorems imply that physical theories will always be incomplete? In recent times the two disciplines have most often been taught separately, despite all the interrelations between physics and mathematics.
Relationship between mathematics and physics
–
A cycloidal pendulum is isochronous, a fact discovered and proved by Christiaan Huygens under certain mathematical assumptions.
127.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN configuration was generated in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966; the 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines. The ISBN was devised in 1967 in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974, and the ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340-01381-8, with 340 indicating the publisher and 01381 their serial number. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Number EAN-13s.
An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback, and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007. A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces; separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for the country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture; in the United Kingdom, the United States, and some other countries, the service is provided by non-government-funded organisations. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker.
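The check digits mentioned above follow well-known rules that the text does not spell out: ISBN-10 weights its first nine digits 10 down to 2 and chooses a check digit making the total divisible by 11 (with "X" standing for 10), while ISBN-13 alternates weights 1 and 3 and works modulo 10. A minimal sketch, with illustrative function names:

```python
def isbn10_check_digit(first9: str) -> str:
    # Weights 10..2 over the first nine digits; the check digit makes the
    # weighted sum congruent to 0 (mod 11). A remainder of 10 is written "X".
    total = sum(w * int(d) for w, d in zip(range(10, 1, -1), first9))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

def isbn13_check_digit(first12: str) -> str:
    # Alternating weights 1, 3; the check digit makes the total ≡ 0 (mod 10).
    total = sum((1 if i % 2 == 0 else 3) * int(d) for i, d in enumerate(first12))
    return str((10 - total % 10) % 10)

# SBN 340-01381-8 prefixed with 0 gives ISBN 0-340-01381-8 (the Hodder example);
# the check digit 8 is unchanged, which is why no re-calculation is needed.
print(isbn10_check_digit("034001381"))    # → 8
# The EAN-13-compatible ISBN shown in the caption below: 978-3-16-148410-0.
print(isbn13_check_digit("978316148410"))  # → 0
```

Prefixing an SBN with 0 leaves every weighted term unchanged (the new leading digit contributes 0 × 10), which is exactly why the check digit carries over.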
International Standard Book Number
–
A 13-digit ISBN, 978-3-16-148410-0, as represented by an EAN-13 bar code
128.
Science (journal)
–
Science, also widely referred to as Science Magazine, is the peer-reviewed academic journal of the American Association for the Advancement of Science (AAAS) and one of the world's top academic journals. It was first published in 1880, is circulated weekly, and has a print subscriber base of around 130,000. Because institutional subscriptions and online access serve a larger audience, its estimated readership is 570,400 people. Unlike most scientific journals, which focus on a specific field, Science covers the full range of scientific disciplines. According to the Journal Citation Reports, Science's 2015 impact factor was 34.661. Although it is the journal of the AAAS, membership in the AAAS is not required to publish in Science, and papers are accepted from authors around the world. Competition to publish in Science is very intense, as an article published in such a highly cited journal can lead to attention and career advancement for the authors; fewer than 7% of articles submitted are accepted for publication. Science is based in Washington, D.C., United States, with a second office in Cambridge, England. Science was founded by New York journalist John Michels in 1880 with financial support from Thomas Edison; however, the journal never gained enough subscribers to succeed and ended publication in March 1882. Entomologist Samuel H. Scudder resurrected the journal one year later and had some success while covering the meetings of prominent American scientific societies; however, by 1894, Science was again in financial difficulty and was sold to psychologist James McKeen Cattell for $500. In an agreement worked out by Cattell and AAAS secretary Leland O. Howard, Science became the journal of the AAAS in 1900. After Cattell died in 1944, the ownership of the journal was transferred to the AAAS, and the journal lacked a consistent editorial presence until Graham DuShane became editor in 1956. In 1958, under DuShane's leadership, Science absorbed The Scientific Monthly. Physicist Philip Abelson, a co-discoverer of neptunium, served as editor from 1962 to 1984.
Under Abelson the efficiency of the review process was improved and the publication practices were brought up to date. During this time, papers on the Apollo program missions and some of the earliest reports on AIDS were published. Biochemist Daniel E. Koshland, Jr. served as editor from 1985 until 1995; from 1995 until 2000, neuroscientist Floyd E. Bloom held that position. Biologist Donald Kennedy became the editor of Science in 2000, biochemist Bruce Alberts took his place in March 2008, and geophysicist Marcia McNutt became editor-in-chief in June 2013. During her tenure the family of journals expanded to include Science Robotics and Science Immunology. Jeremy M. Berg became editor-in-chief on July 1, 2016.
Science (journal)
–
Issue from February–June 1883
Science (journal)
–
Science
129.
Keith Devlin
–
Keith J. Devlin is a British mathematician and popular science writer. Since 1987 he has lived in the United States. Devlin earned a BSc in Mathematics at King's College London in 1968, and a PhD in Mathematics at the University of Bristol in 1971 under the supervision of Frederick Rowbottom. He is a commentator on National Public Radio's Weekend Edition Saturday. As of 2012, he is the author of 34 books and over 80 research articles. Several of his books are aimed at a general audience, including The Joy of Sets: Fundamentals of Contemporary Set Theory; Goodbye, Descartes: The End of Logic and the Search for a New Cosmology of the Mind (John Wiley & Sons, Inc., 1997); The Language of Mathematics: Making the Invisible Visible; The Math Gene: How Mathematical Thinking Evolved and Why Numbers Are Like Gossip; The Millennium Problems: The Seven Greatest Unsolved Mathematical Puzzles of Our Time; The Math Instinct: Why You're a Mathematical Genius; The Numbers Behind NUMB3RS: Solving Crime with Mathematics (with coauthor Gary Lorden); The Unfinished Game: Pascal, Fermat, and the Seventeenth-Century Letter that Made the World Modern; The Man of Numbers: Fibonacci's Arithmetic Revolution; and Mathematics Education for a New Era: Video Games as a Medium for Learning. His official website includes his curriculum vitae and Devlin's Angle, a column at the Mathematical Association of America.
Keith Devlin
–
Keith Devlin (2011)
130.
BBC Radio 4
–
BBC Radio 4 is a radio station owned and operated by the British Broadcasting Corporation that broadcasts a wide variety of spoken-word programmes including news, drama, comedy, science and history. It replaced the BBC Home Service in 1967. The station controller is Gwyneth Williams, and the station is part of BBC Radio and the BBC Radio department. The station is broadcast from the BBC's headquarters at Broadcasting House and is also available through Freeview, Sky, Virgin Media and on the Internet. It is notable for its news bulletins and programmes such as Today and The World at One. BBC Radio 4 is the second most popular British domestic radio station by total hours, after Radio 2, and the most popular in London and the South of England. It recorded its highest audience, of 11 million listeners, in May 2011; was UK Radio Station of the Year at the 2003, 2004 and 2008 Sony Radio Academy Awards; and won a Peabody Award in 2002 for File On 4: Export Controls. Costing £71.4 million, it is the BBC's most expensive national radio network and is considered by many to be its flagship. There is no comparable British commercial network; Channel 4 abandoned plans to launch its own speech-based digital radio station in October 2008 as part of a £100m cost-cutting review. In 2010 Gwyneth Williams replaced Mark Damazer as Radio 4 controller; Damazer became Master of St Peter's College, Oxford. Music and sport are the only fields that largely fall outside the station's remit. It broadcasts occasional concerts, documentaries related to various forms of both popular and classical music, and the long-running music-based Desert Island Discs. As a result of cricket coverage on long wave, for around 70 days a year listeners have to rely on FM broadcasts, or increasingly DAB, for mainstream Radio 4 broadcasts; the number relying solely on long wave is now a small minority.
The cricket broadcasts take precedence over on-the-hour news bulletins, but not the Shipping Forecast. As well as news and drama, the station has a strong reputation for comedy, including experimental and alternative comedy, with many successful comedians and comedy shows first appearing on the station. The BBC Home Service was the predecessor of Radio 4 and broadcast between 1939 and 1967; it had regional variations and was broadcast on medium wave, with a network of VHF FM transmitters being added from 1955. Radio 4 replaced it on 30 September 1967, when the BBC renamed many of its radio stations. For a time during the 1970s Radio 4 carried regional news bulletins Monday to Saturday; these were broadcast twice at breakfast and at lunchtime, and an evening bulletin was aired at 5.55pm. There were also programme variations for the parts of England not served by BBC Local Radio stations; these included Roundabout East Anglia, a VHF opt-out of the Today programme broadcast from BBC East's studios in Norwich each weekday from 6.45 am to 8.45 am. Roundabout East Anglia came to an end in 1980, when local services were introduced to East Anglia with the launch of BBC Radio Norfolk. All regional news bulletins broadcast from BBC regional news bases around England ended in August 1980, apart from in the south west. In September 1991 it was decided that the main Radio 4 service would be on FM, as coverage had extended to cover almost all of the UK; opt-outs were transferred to long wave, currently Test Match Special, extra shipping forecasts and The Daily Service. Long wave very occasionally opts out at other times, such as to broadcast special services, the most recent being when Pope Benedict XVI visited Britain in 2010.
BBC Radio 4
–
Logo of Radio 4 until 2007
BBC Radio 4
–
BBC Radio 4
131.
Henry Liddell
–
Lewis Carroll wrote Alice's Adventures in Wonderland for Henry Liddell's daughter Alice. Liddell received his education at Charterhouse and Christ Church, Oxford; he gained a double first degree in 1833, then became a college tutor, and was ordained in 1838. Liddell was Headmaster of Westminster School from 1846 to 1855. Meanwhile, his life's work, the great lexicon which he and Robert Scott began as early as 1834, had made good progress, and the first edition of Liddell and Scott's Lexicon appeared in 1843. It immediately became the standard Greek–English dictionary, with the 8th edition published in 1897. As Headmaster of Westminster, Liddell enjoyed a period of great success, followed by trouble due to the outbreak of fever and cholera in the school. In 1855 he accepted the deanery of Christ Church, Oxford; in the same year he brought out his History of Ancient Rome and took a very active part in the first Oxford University Commission. His tall figure, fine presence and aristocratic mien were for years associated with all that was characteristic of Oxford life. In 1859 Liddell welcomed the then Prince of Wales when he matriculated at Christ Church. While he was Dean of Christ Church, he arranged for the building of a new choir school and classrooms for the staff and pupils of Christ Church Cathedral School on its present site; before then the school was housed within Christ Church itself. In July 1846, Liddell married Miss Lorina Reeve, with whom he had several children, including Alice Liddell of Lewis Carroll fame. In conjunction with Sir Henry Acland, Liddell did much to encourage the study of art at Oxford. In 1891, owing to advancing years, he resigned the deanery. The last years of his life were spent at Ascot, where he died on 18 January 1898. Two roads in Ascot, Liddell Way and Carroll Crescent, honour the relationship between Henry Liddell and Lewis Carroll.
Liddell was an Oxford character in later years, and he figures in contemporary undergraduate doggerel: "I am the Dean, and this is Mrs Liddell; she plays the first, and I the second fiddle; she is the Broad, I am the High: we are the University." His father was Henry Liddell, Rector of Easington, the younger son of Sir Henry Liddell, 5th Baronet, and the former Elizabeth Steele. His father's elder brother, Sir Thomas Liddell, 6th Baronet, was raised to the Peerage as Baron Ravensworth in 1821. His mother was the former Charlotte Lyon, a daughter of Thomas Lyon and the former Mary Wren. On 2 July 1846, Henry married Lorina Reeve, and they were the parents of ten children, including Edward Harry Liddell; Alice Pleasance Liddell, for whom the children's classic Alice's Adventures in Wonderland was originally told; Rhoda Caroline Anne Liddell, invested as an Officer of the Order of the British Empire in 1920; Albert Edward Arthur Liddell, who died in infancy; Violet Constance Liddell, invested as a Member of the Order of the British Empire in 1920; Sir Frederick Francis Liddell, First Parliamentary Counsel and Ecclesiastical Commissioner; and Lionel Charles Liddell, British Consul to Lyons and Copenhagen.
Henry Liddell
–
Henry Liddell, in an 1891 portrait by Sir Hubert von Herkomer.
Henry Liddell
–
Caricature of Rev. Henry Liddell by 'Ape' from Vanity Fair (1875).
132.
Oxford English Dictionary
–
The Oxford English Dictionary (OED) is a descriptive dictionary of the English language, published by the Oxford University Press. The second edition came to 21,728 pages in 20 volumes. In 1895, the title The Oxford English Dictionary was first used unofficially on the covers of the series, and in 1928 the full dictionary was republished in ten bound volumes. In 1933, the title The Oxford English Dictionary fully replaced the former name in all occurrences in its reprinting as twelve volumes with a one-volume supplement. More supplements came over the years until 1989, when the second edition was published. Since 2000, a third edition of the dictionary has been underway. The first electronic version of the dictionary was made available in 1988. The online version has been available since 2000, and as of April 2014 was receiving two million hits per month. The third edition of the dictionary will probably appear only in electronic form, according to Nigel Portwood, chief executive of Oxford University Press. As a historical dictionary, the Oxford English Dictionary explains words by showing their development rather than merely their present-day usages; it therefore shows definitions in the order that the sense of the word began being used, including word meanings which are no longer used. The format of the OED's entries has influenced numerous other historical lexicography projects, as well as later volumes of this and other lexicographical works. As of 30 November 2005, the Oxford English Dictionary contained approximately 301,100 main entries. The dictionary's latest complete print edition was printed in 20 volumes, comprising 291,500 entries in 21,730 pages. The longest entry in the OED2 was for the verb set; as entries began to be revised for the OED3 in sequence starting from M, the longest entry became make in 2000, then put in 2007, then run in 2011.
Despite its impressive size, the OED is neither the world's largest nor the earliest exhaustive dictionary of a language. The Dutch dictionary Woordenboek der Nederlandsche Taal is the world's largest dictionary, has similar aims to the OED, and took twice as long to complete. Another earlier large dictionary is the Grimm brothers' dictionary of the German language, begun in 1838. The official dictionary of Spanish is the Diccionario de la lengua española, whose first edition was published in 1780, and the Kangxi dictionary of Chinese was published in 1716. Trench suggested that a new, truly comprehensive dictionary was needed, and on 7 January 1858 the Society formally adopted the idea of a new dictionary. Volunteer readers would be assigned particular books, copying passages illustrating word usage onto quotation slips. Later the same year, the Society agreed to the project in principle, with the title A New English Dictionary on Historical Principles. Herbert Coleridge became the first editor, and on 12 May 1860 Coleridge's dictionary plan was published and research was started.
Oxford English Dictionary
–
Seven of the twenty volumes of the printed version of the second edition of the OED
Oxford English Dictionary
–
Frederick Furnivall, 1825–1910
Oxford English Dictionary
–
James Murray in the Scriptorium at Banbury Road
Oxford English Dictionary
–
78 Banbury Road, Oxford, erstwhile residence of James Murray, Editor of the Oxford English Dictionary
133.
Oxford University Press
–
Oxford University Press (OUP) is the largest university press in the world, and the second oldest after Cambridge University Press. It is a department of the University of Oxford and is governed by a group of 15 academics, known as the delegates of the press, appointed by the vice-chancellor. They are headed by the secretary to the delegates, who serves as OUP's chief executive. Oxford University has used a similar system to oversee OUP since the 17th century. The university became involved in the print trade around 1480, and grew into a major printer of Bibles and prayer books. OUP took on the project that became the Oxford English Dictionary in the late 19th century. Moves into international markets led to OUP opening its own offices outside the United Kingdom. Having contracted out its printing and binding operations, the modern OUP publishes some 6,000 new titles around the world each year. OUP was first exempted from United States corporation tax in 1972; as a department of a charity, OUP is exempt from income tax and corporate tax in most countries, but may pay sales and other commercial taxes on its products. The OUP today transfers 30% of its annual surplus to the rest of the university. OUP is the largest university press in the world by number of publications, publishing more than 6,000 new books every year. The Oxford University Press Museum is located on Great Clarendon Street, Oxford; visits must be booked in advance and are led by a member of the archive staff. Displays include a 19th-century printing press, the OUP buildings, and the printing and history of the Oxford Almanack, Alice in Wonderland and the Oxford English Dictionary. The first printer associated with Oxford University was Theoderic Rood, though the first book printed in Oxford, in 1478, an edition of Rufinus's Expositio in symbolum apostolorum, was printed by another, anonymous, printer.
Famously, this was mis-dated in Roman numerals as 1468, thus apparently pre-dating Caxton. Rood's printing included John Ankywyll's Compendium totius grammaticae, which set new standards for the teaching of Latin grammar. After Rood, printing connected with the university remained sporadic for over half a century. The chancellor, Robert Dudley, 1st Earl of Leicester, pleaded Oxford's case, and some royal assent was obtained, since the printer Joseph Barnes began work. Oxford's chancellor, Archbishop William Laud, consolidated the legal status of the university's printing in the 1630s. Laud envisaged a unified press of world repute: Oxford would establish it on university property, govern its operations, employ its staff, determine its printed work, and benefit from its proceeds. To that end, he petitioned Charles I for rights that would enable Oxford to compete with the Stationers' Company and the King's Printer; these were brought together in Oxford's Great Charter in 1636, which gave the university the right to print all manner of books. Laud also obtained the privilege from the Crown of printing the King James or Authorized Version of Scripture at Oxford. This privilege created substantial returns over the next 250 years, although initially it was held in abeyance. The Stationers' Company was deeply alarmed by the threat to its trade; under an agreement, the Stationers paid an annual rent for the university not to exercise its full printing rights – money Oxford used to purchase new printing equipment for smaller purposes.
Oxford University Press
–
Oxford University Press on Walton Street.
Oxford University Press
–
2008 conference booth
134.
Diophantus
–
Diophantus of Alexandria, sometimes called the father of algebra, was an Alexandrian Greek mathematician and the author of a series of books called Arithmetica, many of which are now lost. These texts deal with solving algebraic equations; they led to tremendous advances in number theory, and the study of Diophantine equations and of Diophantine approximations remain important areas of mathematical research. Diophantus coined the term παρισότης to refer to an approximate equality; this term was rendered as adaequalitas in Latin, and became the technique of adequality developed by Pierre de Fermat to find maxima of functions and tangent lines to curves. Diophantus was the first Greek mathematician who recognized fractions as numbers, and thus he allowed positive rational numbers as coefficients. In modern use, Diophantine equations are usually algebraic equations with integer coefficients, for which integer solutions are sought. Diophantus also made advances in mathematical notation. Little is known about the life of Diophantus. He lived in Alexandria, Egypt, probably from between AD 200 and 214 to 284 or 298. Much of our knowledge of the life of Diophantus is derived from a 5th-century Greek anthology of number games and puzzles created by Metrodorus. One of the puzzles states: "Here lies Diophantus, the wonder behold. ... After attaining half the measure of his father's life, chill fate took him. After consoling his fate by the science of numbers for four years, he ended his life." This puzzle implies that Diophantus' age x can be expressed as x = x/6 + x/12 + x/7 + 5 + x/2 + 4, which gives x a value of 84 years. However, the accuracy of the information cannot be independently confirmed. The Arithmetica is the major work of Diophantus and the most prominent work on algebra in Greek mathematics.
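The age-puzzle equation above can be checked with exact rational arithmetic. This is only a verification of the stated arithmetic, not anything from Diophantus himself:

```python
from fractions import Fraction

def lifespan() -> int:
    # x = x/6 + x/12 + x/7 + 5 + x/2 + 4
    # Collect the x terms:  x * (1 - 1/6 - 1/12 - 1/7 - 1/2) = 9
    coeff = 1 - Fraction(1, 6) - Fraction(1, 12) - Fraction(1, 7) - Fraction(1, 2)
    x = Fraction(9) / coeff  # coeff = 3/28, so x = 9 * 28/3
    assert x.denominator == 1  # the puzzle has a whole-number solution
    return x.numerator

print(lifespan())  # → 84
```

Using Fraction avoids floating-point rounding, so the result is exactly the 84 years the epitaph implies: a boyhood of 14 years, a beard at 21, marriage at 33, a son at 38 who lived 42 years, and 4 final years.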
It is a collection of problems giving numerical solutions of both determinate and indeterminate equations. Of the original thirteen books of which Arithmetica consisted, only six have survived, though there are some who believe that four Arabic books discovered in 1968 are also by Diophantus. Some Diophantine problems from Arithmetica have been found in Arabic sources. It should be mentioned here that Diophantus never used general methods in his solutions. Hermann Hankel, the renowned German mathematician, made the following remark regarding Diophantus: "In our author not the slightest trace of a general, comprehensive method is discernible; each problem calls for some special method which refuses to work even for the most closely related problems." The portion of the Greek Arithmetica that survived was, like all ancient Greek texts transmitted to the modern world, copied by hand; in addition, some portion of the Arithmetica probably survived in the Arab tradition. Arithmetica was first translated from Greek into Latin by Bombelli in 1570; however, Bombelli borrowed many of the problems for his own book Algebra. The editio princeps of Arithmetica was published in 1575 by Xylander. The best known Latin translation of Arithmetica was made by Bachet in 1621 and became the first Latin edition that was widely available. Pierre de Fermat owned a copy, studied it, and made notes in the margins, including the famous claim "I have a marvelous proof of this proposition which this margin is too narrow to contain." Fermat's proof was never found.
Diophantus
–
Title page of the 1621 edition of Diophantus' Arithmetica, translated into Latin by Claude Gaspard Bachet de Méziriac.
Diophantus
–
Problem II.8 in the Arithmetica (edition of 1670), annotated with Fermat's comment which became Fermat's Last Theorem.
135.
Franciscus Vieta
–
François Viète (Franciscus Vieta) was a French mathematician; a lawyer by trade, he served as a privy councillor to both Henry III and Henry IV. Viète was born at Fontenay-le-Comte in present-day Vendée; his grandfather was a merchant from La Rochelle. His father, Étienne Viète, was an attorney in Fontenay-le-Comte, and his mother was the aunt of Barnabé Brisson, a magistrate and the first president of parliament during the ascendancy of the Catholic League of France. Viète went to a Franciscan school and in 1558 studied law at Poitiers; a year later, he began his career as an attorney in his native town. From the outset, he was entrusted with some major cases, including the settlement of rent in Poitou for the widow of King Francis I of France and looking after the interests of Mary, Queen of Scots. The same year, at Parc-Soubise, in the commune of Mouchamps in present-day Vendée, Viète became the tutor of Catherine de Parthenay; he taught her science and mathematics and wrote for her numerous treatises on astronomy, geography and trigonometry, some of which have survived. In these treatises, Viète used decimal numbers and noted the elliptic orbit of the planets, forty years before Kepler. John V de Parthenay presented him to King Charles IX of France; Viète wrote a genealogy of the Parthenay family and, following the death of Jean V de Parthenay-Soubise in 1566, his biography. In 1570, he refused to represent the Soubise ladies in their infamous lawsuit against the Baron De Quellenec. In 1571, he enrolled as an attorney in Paris, and continued to visit his student Catherine. He regularly lived in Fontenay-le-Comte, where he took on some municipal functions. He began publishing his Universalium inspectionum ad canonem mathematicum liber singularis and wrote new mathematical research by night or during periods of leisure; he was known to dwell on any one question for up to three days, his elbow on the desk, feeding himself without changing position. In 1572, Viète was in Paris during the St. Bartholomew's Day massacre.
That night, Baron De Quellenec was killed after having tried to save Admiral Coligny the previous night. The same year, Viète met Françoise de Rohan, Lady of Garnache, and became her adviser against Jacques, Duke of Nemours. In 1576, Henri, duc de Rohan, took him under his special protection, and in 1579 Viète printed his Canonem mathematicum. A year later, he was appointed maître des requêtes to the parliament of Paris. That same year, his success in the trial between the Duke of Nemours and Françoise de Rohan, to the benefit of the latter, earned him the resentment of the tenacious Catholic League. Between 1583 and 1585, the League persuaded Henry III to release Viète from office, Viète having been accused of sympathy with the Protestant cause. Henry of Navarre, at Rohan's instigation, addressed two letters to King Henry III of France on March 3 and April 26, 1585, in an attempt to obtain Viète's restoration to his former office; failing this, Viète retired to Fontenay and Beauvoir-sur-Mer, with François de Rohan. He spent four years devoted to mathematics, writing his Analytical Art, or New Algebra. In 1589, Henry III took refuge in Blois and commanded the royal officials to be at Tours before 15 April 1589.
Franciscus Vieta
–
François Viète, French mathematician
Franciscus Vieta
–
Opera, 1646
136.
Herbert Robbins
–
Herbert Ellis Robbins was an American mathematician and statistician. He did research in topology, measure theory and statistics. He was the co-author, with Richard Courant, of What is Mathematics?, a popularization that is still in print. The Robbins lemma, used in empirical Bayes methods, is named after him. Robbins algebras are named after him because of a conjecture that he posed concerning Boolean algebras; the Robbins theorem, in graph theory, is also named after him, as is the Whitney–Robbins synthesis, a tool he introduced to prove this theorem. Robbins was born in New Castle, Pennsylvania. As an undergraduate, Robbins attended Harvard University, where Marston Morse influenced him to become interested in mathematics. Robbins received a doctorate from Harvard in 1938 under the supervision of Hassler Whitney and was an instructor at New York University from 1939 to 1941. In 1953, he became a professor of mathematical statistics at Columbia University. He retired from full-time activity at Columbia in 1985 and was then a professor at Rutgers University until his retirement in 1997. He has 567 descendants listed at the Mathematics Genealogy Project. In 1955, Robbins introduced empirical Bayes methods at the Third Berkeley Symposium on Mathematical Statistics. Robbins was also one of the inventors of the first stochastic approximation algorithm, the Robbins–Monro method, and worked on the theory of power-one tests and optimal stopping; these policies were simplified in the 1995 paper "Sequential choice from several populations". He was a member of the National Academy of Sciences and the American Academy of Arts and Sciences and was past president of the Institute of Mathematical Statistics.
Books by Herbert Robbins: What is Mathematics? An Elementary Approach to Ideas and Methods, with Richard Courant, London: Oxford University Press, 1941; Great Expectations: The Theory of Optimal Stopping, with Y. S. Chow; Introduction to Statistics, with John Van Ryzin, Science Research Associates, 1975. Articles: "A theorem on graphs with an application to a problem on traffic control", American Mathematical Monthly; "The central limit theorem for dependent random variables", with Wassily Hoeffding, Duke Mathematical Journal; "A stochastic approximation method", with Sutton Monro, Annals of Mathematical Statistics; "Some aspects of the sequential design of experiments", Bulletin of the American Mathematical Society; "Two-stage procedures for estimating the difference between means", with S. G. Ghurye, Biometrika, 41, 146–152, 1954; "The strong law of large numbers when the first moment does not exist", with C. Derman, Proceedings of the National Academy of Sciences of the United States of America; "An empirical Bayes approach to statistics", in Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, Jerzy Neyman, ed., vol. 1, Berkeley: University of California Press, 1956; "On the asymptotic theory of fixed-width sequential confidence intervals for the mean", with Y. S. Chow, The Annals of Mathematical Statistics, 36, 457–462, 1965; "Statistical methods related to the law of the iterated logarithm", The Annals of Mathematical Statistics, 41, 1397–1409, 1970; "Optimal stopping", The American Mathematical Monthly, 77, 333–343, 1970; "A convergence theorem for nonnegative almost supermartingales and some applications", with David Siegmund, Optimizing Methods in Statistics, 233–257, 1971.
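The Robbins–Monro method mentioned above finds a root of a function that can only be observed with noise, by taking steps of diminishing size against each noisy observation. A toy sketch under assumed conditions (a linear target and step sizes a_n = c/n, which satisfy the method's usual summability requirements), not the authors' original presentation:

```python
import random

def robbins_monro(noisy_f, theta0=0.0, steps=5000, c=1.0):
    # Solve M(theta) = 0 given only noisy evaluations Y_n of M(theta_n).
    # Step sizes a_n = c/n satisfy sum a_n = infinity and sum a_n^2 < infinity,
    # the classical conditions for convergence of the iteration.
    theta = theta0
    for n in range(1, steps + 1):
        theta -= (c / n) * noisy_f(theta)
    return theta

# Hypothetical target: M(theta) = theta - 3 observed with zero-mean Gaussian
# noise; the root being sought is theta = 3.
random.seed(0)
est = robbins_monro(lambda t: (t - 3) + random.gauss(0, 1))
print(est)
```

With 5000 iterations the estimate lands close to 3; the noise is averaged out because each observation enters with weight roughly 1/n.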
Herbert Robbins
–
Herbert Robbins visiting Purdue in 1966
137.
Clarendon Press
–
Oxford University Press (OUP) is the largest university press in the world, and the second oldest after Cambridge University Press. It is a department of the University of Oxford and is governed by a group of 15 academics, known as the delegates of the press, appointed by the vice-chancellor. They are headed by the secretary to the delegates, who serves as OUP's chief executive. Oxford University has used a similar system to oversee OUP since the 17th century. The university became involved in the print trade around 1480 and grew into a printer of Bibles and prayer books. OUP took on the project that became the Oxford English Dictionary in the late 19th century. Moves into international markets led to OUP opening its own offices outside the United Kingdom. Having contracted out its printing and binding operations, the modern OUP publishes some 6,000 new titles around the world each year. OUP was first exempted from United States corporation tax in 1972. As a department of a charity, OUP is exempt from income tax and corporate tax in most countries, but may pay sales and other commercial taxes on its products. OUP today transfers 30% of its surplus to the rest of the university. OUP is the largest university press in the world by number of publications, publishing more than 6,000 new books every year. The Oxford University Press Museum is located on Great Clarendon Street, Oxford. Visits must be booked in advance and are led by a member of the archive staff; displays include a 19th-century printing press, the OUP buildings, and the printing and history of the Oxford Almanack, Alice in Wonderland and the Oxford English Dictionary. The first printer associated with Oxford University was Theoderic Rood, though the first book printed in Oxford, in 1478, an edition of Rufinus's Expositio in symbolum apostolorum, was printed by another, anonymous, printer. 
Famously, this was mis-dated in Roman numerals as 1468, thus apparently pre-dating Caxton. Rood's printing included John Ankywyll's Compendium totius grammaticae, which set new standards for the teaching of Latin grammar. After Rood, printing connected with the university remained sporadic for over half a century. The chancellor, Robert Dudley, 1st Earl of Leicester, pleaded Oxford's case; some royal assent was obtained, and the printer Joseph Barnes began work. Oxford's chancellor, Archbishop William Laud, consolidated the legal status of the university's printing in the 1630s. Laud envisaged a unified press of world repute: Oxford would establish it on university property, govern its operations, employ its staff, determine its printed work, and benefit from its proceeds. To that end, he petitioned Charles I for rights that would enable Oxford to compete with the Stationers' Company and the King's Printer; these were brought together in Oxford's Great Charter in 1636, which gave the university the right to print all manner of books. Laud also obtained from the Crown the privilege of printing the King James or Authorized Version of Scripture at Oxford. This privilege created substantial returns in the next 250 years, although initially it was held in abeyance. The Stationers' Company was deeply alarmed by the threat to its trade. Under the resulting arrangement, the Stationers paid an annual rent for the university not to exercise its full printing rights – money Oxford used to purchase new printing equipment for smaller purposes
Clarendon Press
–
Oxford University Press on Walton Street.
Clarendon Press
–
Oxford University Press
Clarendon Press
–
2008 conference booth
138.
Jan Gullberg
–
Jan Gullberg was a Swedish surgeon and anaesthesiologist who became known as a writer on popular science and medical topics. He is best known outside Sweden as the author of Mathematics: From the Birth of Numbers. Gullberg grew up and was trained as a surgeon in Sweden, qualifying in medicine at the University of Lund in 1964. He practised as a surgeon in Saudi Arabia, Norway and at Virginia Mason Hospital in Seattle in the United States, as well as in Sweden. Gullberg saw himself as a surgeon rather than a writer. His first book, on science, won the Swedish Medical Society's Jubilee Prize in 1980. He died of a stroke in Nordfjordeid, Norway, at the hospital where he was working. He was twice married, first to Anne-Marie Hallin, with whom he had three children, and then to Ann, with whom he adopted two sons. Gullberg's second book, Mathematics: From the Birth of Numbers, took ten years to write, consuming all of Gullberg's spare time. It proved a success: its first edition of 17,000 copies was virtually sold out within six months. "I take it with me everywhere I go," the reviewer Allen says, adding that the book has special charm, making innovative use of the margin and providing excellent quotes and quips throughout. His favourite chapter is "Cornerstones of Mathematics", which he believes should appeal both to beginners and old hands, and he admires the efficient Babylonian method of finding square roots using division and averaging. He learns from Gullberg how to multiply and divide using an abacus. Allen is delighted by the chapter on combinatorics, with its approach to graph theory and magic squares, complete with a 1740 map of the seven bridges of Königsberg, and he loved the chapter on probability. He records that he finds its introductory accounts useful for engineers who use maths only occasionally, and suggests how the book could be used for undergraduate students. 
He concludes by describing the book, ten years in the making, as gigantic in every sense, and calls it a giant leap forward for mathematics and all those who love it. The book was reviewed in Scientific American, but more reservedly in New Scientist. Kevin Kelly comments that the book is able to provide answers on obscure mathematical concepts; in his view, "the book has wit and humor". Gullberg commented, "At the start no real mathematician would accept my book", and that perhaps it was as crazy of him to write a book on mathematics as it would be for a mathematician to write a book on surgery. His other works include Vätska Gas Energi – Kemi och Fysik med tillämpningar i vätskebalans-, blodgas- och näringslära, Kiruna
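The "division and averaging" square-root method the reviewer admires, often called the Babylonian or Heron's method, can be sketched as follows; this is an illustrative version, not code from the book:

```python
def babylonian_sqrt(n, tolerance=1e-10):
    """Approximate the square root of n by "divide and average":
    if x overestimates sqrt(n), then n/x underestimates it (and vice
    versa), so their mean is a better guess. Each iteration roughly
    doubles the number of correct digits."""
    x = max(n, 1.0)              # any positive starting guess works
    while abs(x * x - n) > tolerance:
        x = (x + n / x) / 2.0    # average the guess with n / guess
    return x
```

For example, babylonian_sqrt(2) reaches ten-decimal accuracy in a handful of iterations, which is why the method was practical even for hand computation.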
Jan Gullberg
–
Jan Gullberg
139.
BBC
–
The British Broadcasting Corporation (BBC) is a British public service broadcaster headquartered at Broadcasting House in London, England. The total number of staff is 35,402 when part-time and flexible workers are included. The BBC is established under a Royal Charter and operates under its Agreement with the Secretary of State for Culture, Media and Sport. The licence fee is set by the British Government, agreed by Parliament, and used to fund the BBC's radio, TV and online services. Britain's first live public broadcast, from the Marconi factory in Chelmsford, took place in June 1920. It was sponsored by the Daily Mail's Lord Northcliffe and featured the famous Australian soprano Dame Nellie Melba. The Melba broadcast caught the people's imagination and marked a turning point in the British public's attitude to radio. However, this public enthusiasm was not shared in official circles, where such broadcasts were held to interfere with important military and civil communications. By late 1920, pressure from these quarters, and uneasiness among the staff of the licensing authority, the General Post Office (GPO), was sufficient to lead to a ban on further Chelmsford broadcasts. But by 1922 the GPO had received nearly 100 broadcast licence requests. John Reith, a Scottish Calvinist, was appointed the company's General Manager in December 1922, a few weeks after the company made its first official broadcast. The company was to be financed by a royalty on the sale of BBC wireless receiving sets from approved manufacturers. To this day, the BBC aims to follow the Reithian directive to "inform, educate and entertain". The financial arrangements soon proved inadequate: set sales were disappointing, as amateurs made their own receivers and listeners bought rival unlicensed sets. 
By mid-1923, discussions between the GPO and the BBC had become deadlocked, and the Postmaster-General commissioned a review of broadcasting by the Sykes Committee. The royalty was to be followed by a simple 10-shilling licence fee with no royalty once the wireless manufacturers' protection expired. The BBC's broadcasting monopoly was made explicit for the duration of its current broadcast licence; the BBC was also banned from presenting news bulletins before 19.00 and required to source all news from external wire services. Mid-1925 found the future of broadcasting under further consideration, this time by the Crawford committee. By now the BBC, under Reith's leadership, had forged a consensus favouring a continuation of the unified broadcasting service, but more money was still required to finance rapid expansion. Wireless manufacturers were anxious to exit the loss-making consortium, with Reith keen that the BBC be seen as a public service rather than a commercial enterprise. The recommendations of the Crawford Committee were published in March the following year and were still under consideration by the GPO when the 1926 general strike broke out in May. The strike temporarily interrupted newspaper production, and with restrictions on news bulletins waived, the BBC suddenly became the source of news for the duration of the crisis. The crisis placed the BBC in a delicate position: the Government was divided on how to handle the BBC but ended up trusting Reith, whose opposition to the strike mirrored the PM's own. Thus the BBC was granted sufficient leeway to pursue the Government's objectives largely in a manner of its own choosing. Supporters of the strike nicknamed the BBC the BFC, for British Falsehood Company. Reith personally announced the end of the strike, which he marked by reciting from Blake's "Jerusalem", signifying that England had been saved. Reith argued that the trust gained by authentic, impartial news could then be put to use
BBC
–
BBC Television Centre at White City, West London, which opened in 1960 and closed in 2013
BBC
–
BBC Pacific Quay in Glasgow, which was opened in 2007
BBC
–
BBC New Broadcasting House, London which came into use during 2012–13.
BBC
–
The headquarters of the BBC at Broadcasting House in Portland Place, London, England. This section of the building is called 'Old Broadcasting House'.
140.
Wiki
–
A wiki is a website that provides collaborative modification of its content and structure directly from the web browser. In a typical wiki, text is written using a simplified markup language. A wiki is run using wiki software, otherwise known as a wiki engine. There are dozens of different wiki engines in use, both standalone and as part of other software, such as bug tracking systems. Some wiki engines are open source, whereas others are proprietary. Some permit control over different functions: editing rights may permit changing, adding or removing material, while others may permit access without enforcing access control. Other rules may also be imposed to organize content. Wikipedia is not a single wiki but rather a collection of hundreds of wikis, one for each language. There are at least tens of thousands of other wikis in use, both public and private, including wikis functioning as knowledge management resources, notetaking tools and community websites. The English-language Wikipedia has the largest collection of articles as of September 2016. Ward Cunningham, the developer of the first wiki software, WikiWikiWeb, originally described it as "the simplest online database that could possibly work". Wiki is a Hawaiian word meaning "quick". A wiki promotes meaningful topic associations between different pages by making page-link creation intuitively easy and showing whether an intended target page exists or not. A wiki is not a carefully crafted site created by experts and professional writers; instead, it seeks to involve the typical visitor/user in an ongoing process of creation and collaboration that constantly changes the website landscape. A wiki enables communities of editors and contributors to write documents collaboratively; all that people require to contribute is a computer, Internet access, a web browser and a basic understanding of a simple markup language. A single page in a wiki website is referred to as a wiki page, while the entire collection of pages, which are usually well-interconnected by hyperlinks, is the wiki. 
A wiki is essentially a database for creating, browsing and searching through information. A wiki allows non-linear, evolving, complex and networked text, while also allowing for editor argument, debate and interaction regarding the content and formatting. A defining characteristic of wiki technology is the ease with which pages can be created and updated; generally, there is no review by a moderator or gatekeeper before modifications are accepted and thus lead to changes on the website. Many wikis are open to alteration by the public without requiring registration of user accounts. Many edits can be made in real time and appear almost instantly online; however, this feature facilitates abuse of the system. Private wiki servers require user authentication to edit pages, and sometimes even to read them. Maged N. Kamel Boulos, Cito Maramba and Steve Wheeler write that open wikis produce a process of Social Darwinism: unfit sentences and sections are ruthlessly culled, edited and replaced if they are not considered fit. While such openness may invite vandalism and the posting of untrue information, this same openness also makes it possible to rapidly correct or restore a quality wiki page
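The page-link behaviour described above, turning easy-to-type markup into links and flagging targets that do not yet exist, can be sketched in a toy form. The markup pattern, function names and CSS classes here are invented for illustration and do not correspond to any particular wiki engine:

```python
import re

# A toy page store; a real wiki engine would back this with a database.
pages = {
    "HomePage": "Welcome. See [[Sandbox]] and [[MissingPage]].",
    "Sandbox": "Try your edits here.",
}

def render(text, store):
    """Replace [[Target]] markup with a link, marking targets that do
    not yet exist -- the visual cue that invites readers to create
    the missing page."""
    def link(match):
        target = match.group(1)
        cls = "existing" if target in store else "missing"
        return f'<a class="{cls}" href="/{target}">{target}</a>'
    return re.sub(r"\[\[([^\]]+)\]\]", link, text)

html = render(pages["HomePage"], pages)
```

Rendering HomePage produces one "existing" link (Sandbox) and one "missing" link (MissingPage), illustrating how a wiki shows whether an intended target page exists.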
Wiki
–
Ward Cunningham, inventor of the wiki
Wiki
–
Wiki Wiki Shuttle at Honolulu International Airport
Wiki
–
Homepage of Wikipedia
141.
Dynamical systems theory
–
Dynamical systems theory is an area of mathematics used to describe the behavior of complex dynamical systems, usually by employing differential equations or difference equations. When differential equations are employed, the theory is called continuous dynamical systems; when difference equations are employed, it is called discrete dynamical systems. Some situations may also be modeled by mixed operators, such as differential-difference equations. Much of modern research is focused on the study of chaotic systems. This field of study is also called just dynamical systems, mathematical dynamical systems theory, or the mathematical theory of dynamical systems. Dynamical systems theory and chaos theory deal with the long-term qualitative behavior of dynamical systems: does the system settle down to a steady state, and does the long-term behavior of the system depend on its initial condition? An important goal is to describe the fixed points, or steady states, of a given dynamical system; these are values of the variable that don't change over time. Some of these fixed points are attractive, meaning that if the system starts out in a nearby state, it converges towards the fixed point. Similarly, one is interested in periodic points, states of the system that repeat after several timesteps. Periodic points can also be attractive. Sharkovskii's theorem is an interesting statement about the number of periodic points of a one-dimensional discrete dynamical system. Even simple nonlinear dynamical systems often exhibit seemingly random behavior that has been called chaos; the branch of dynamical systems that deals with the clean definition and investigation of chaos is called chaos theory. The concept of dynamical systems theory has its origins in Newtonian mechanics. Before the advent of fast computing machines, solving a dynamical system required sophisticated mathematical techniques. 
The dynamical system concept is a mathematical formalization for any fixed rule that describes the time dependence of a point's position in its ambient space. Examples include the mathematical models that describe the swinging of a clock pendulum and the flow of water in a pipe. A dynamical system has a state determined by a collection of real numbers; small changes in the state of the system correspond to small changes in the numbers. The numbers are also the coordinates of a geometrical space, a manifold. The evolution rule of the dynamical system is a fixed rule that describes what future states follow from the current state; the rule may be deterministic or stochastic. Applied to cognitive science, dynamical systems theory argues that differential equations are more suited to modelling cognition than more traditional computer models. In mathematics, a nonlinear system is a system that is not linear, i.e. one that does not satisfy the superposition principle
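As a concrete illustration of these ideas (not drawn from the article itself), the logistic map is a standard one-dimensional discrete dynamical system that exhibits an attracting fixed point for some parameter values and chaotic sensitivity to initial conditions for others. A minimal sketch:

```python
def iterate(r, x0, steps):
    """Iterate the logistic map x -> r * x * (1 - x), a standard
    one-dimensional discrete dynamical system, for a given number
    of timesteps, returning the final state."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# For r = 2.5 the fixed point x* = 1 - 1/r = 0.6 is attractive:
# nearby initial conditions converge towards it.
settled = iterate(2.5, 0.2, 100)

# For r = 4.0 the map is chaotic: two initial conditions differing
# by only 1e-9 diverge after a modest number of steps, so long-term
# behavior depends sensitively on the initial condition.
a = iterate(4.0, 0.2, 50)
b = iterate(4.0, 0.2 + 1e-9, 50)
```

This makes both questions from the text concrete: for r = 2.5 the system settles to a steady state regardless of nearby starting points, while for r = 4.0 the long-term behavior depends on the initial condition.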
Dynamical systems theory
–
The Lorenz attractor is an example of a non-linear dynamical system. Studying this system helped give rise to chaos theory.
142.
Discrete mathematics
–
Discrete mathematics is the study of mathematical structures that are fundamentally discrete rather than continuous. Discrete mathematics therefore excludes topics in "continuous mathematics" such as calculus. Discrete objects can often be enumerated by integers; more formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets. However, there is no exact definition of the term "discrete mathematics". Indeed, discrete mathematics is described less by what is included than by what is excluded: continuously varying quantities and related notions. The set of objects studied in discrete mathematics can be finite or infinite. The term finite mathematics is sometimes applied to parts of the field of discrete mathematics that deal with finite sets. Conversely, computer implementations are significant in applying ideas from discrete mathematics to real-world problems. Although the main objects of study in discrete mathematics are discrete objects, analytic methods from continuous mathematics are often employed as well. In university curricula, "Discrete Mathematics" appeared in the 1980s, initially as a computer science support course; some high-school-level discrete mathematics textbooks have appeared as well. At this level, discrete mathematics is seen as a preparatory course. The Fulkerson Prize is awarded for outstanding papers in discrete mathematics. The history of discrete mathematics has involved a number of challenging problems which have focused attention within areas of the field. In graph theory, much research was motivated by attempts to prove the four color theorem, first stated in 1852. In logic, the second problem on David Hilbert's list of open problems presented in 1900 was to prove that the axioms of arithmetic are consistent. Gödel's second incompleteness theorem, proved in 1931, showed that this was not possible, at least not within arithmetic itself. Hilbert's tenth problem was to determine whether a given polynomial Diophantine equation with integer coefficients has an integer solution. 
In 1970, Yuri Matiyasevich proved that this could not be done. At the same time, military requirements motivated advances in operations research. The Cold War meant that cryptography remained important, with fundamental advances such as public-key cryptography being developed in the following decades. Operations research remained important as a tool in business and project management, with the critical path method being developed in the 1950s. The telecommunication industry has also motivated advances in discrete mathematics, particularly in graph theory. Formal verification of statements in logic has been necessary for the development of safety-critical systems. Computational geometry has been an important part of the computer graphics incorporated into modern video games. Currently, one of the most famous open problems in theoretical computer science is the P = NP problem, which involves the relationship between the complexity classes P and NP
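Graph coloring, the discrete structure behind the four color theorem mentioned above, can be illustrated with a standard greedy heuristic. This is a textbook sketch of coloring in general, not the proof technique used for the four color theorem, and it does not guarantee a minimum number of colors:

```python
def greedy_coloring(adjacency):
    """Assign each vertex the smallest color index not already used by
    one of its colored neighbors. This uses at most max_degree + 1
    colors, though not necessarily the minimum possible."""
    colors = {}
    for vertex in sorted(adjacency):
        used = {colors[n] for n in adjacency[vertex] if n in colors}
        color = 0
        while color in used:   # find the smallest unused color
            color += 1
        colors[vertex] = color
    return colors

# A small planar graph: a square a-b-c-d with the diagonal a-c,
# which requires three colors.
graph = {"a": ["b", "c", "d"], "b": ["a", "c"],
         "c": ["a", "b", "d"], "d": ["a", "c"]}
coloring = greedy_coloring(graph)
```

The result assigns different colors to every pair of adjacent vertices, the defining property of a proper coloring; for planar graphs, the four color theorem guarantees that four colors always suffice, though proving it took over a century.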
Discrete mathematics
–
Graphs like this are among the objects studied by discrete mathematics, for their interesting mathematical properties, their usefulness as models of real-world problems, and their importance in developing computer algorithms.
143.
Integrated Authority File
–
The Integrated Authority File, or GND, is an international authority file for the organisation of personal names, subject headings and corporate bodies from catalogues. It is used mainly for documentation in libraries and increasingly also by archives. The GND is managed by the German National Library in cooperation with various regional library networks in German-speaking Europe and other partners. The GND falls under the Creative Commons Zero (CC0) license. The GND specification provides a hierarchy of high-level entities and sub-classes, useful in library classification, and an approach to unambiguous identification of single elements. It also comprises an ontology intended for knowledge representation in the semantic web, available in the RDF format
Integrated Authority File
–
GND screenshot
144.
National Diet Library
–
The National Diet Library (NDL) is the only national library in Japan. It was established in 1948 for the purpose of assisting members of the National Diet of Japan in researching matters of public policy. The library is similar in purpose and scope to the United States Library of Congress. The National Diet Library consists of two main facilities in Tokyo and Kyoto, and several other branch libraries throughout Japan. The Diet's power in prewar Japan was limited, and its need for information was correspondingly small. The original Diet libraries never developed either the collections or the services which might have made them vital adjuncts of genuinely responsible legislative activity. Until Japan's defeat, moreover, the executive had controlled all political documents, depriving the people and the Diet of access to vital information. The U.S. occupation forces under General Douglas MacArthur deemed reform of the Diet library system to be an important part of the democratization of Japan after its defeat in World War II. In 1946, each house of the Diet formed its own National Diet Library Standing Committee. Hani Gorō, a Marxist historian who had been imprisoned during the war for thought crimes and had been elected to the House of Councillors after the war, spearheaded the reform efforts. Hani envisioned the new body as both a "citadel of popular sovereignty" and the means of realizing a "peaceful revolution". The National Diet Library opened in June 1948 in the present-day State Guest-House with an initial collection of 100,000 volumes. The first Librarian of the National Diet Library was the politician Tokujirō Kanamori; the philosopher Masakazu Nakai served as the first Vice Librarian. In 1949, the NDL merged with the National Library and became the only national library in Japan. At this time the collection gained a million volumes previously housed in the former National Library in Ueno. 
In 1961, the NDL opened at its present location in Nagatachō. In 1986, the NDL's Annex was completed to accommodate a combined total of 12 million books and periodicals. The Kansai-kan, which opened in October 2002 in the Kansai Science City, has a collection of 6 million items. In May 2002, the NDL opened a new branch, the International Library of Children's Literature, in the former building of the Imperial Library in Ueno. This branch contains some 400,000 items of children's literature from around the world. Though the NDL's original mandate was to be a library for the National Diet, it also serves the general public. In the fiscal year ending March 2004, for example, the library reported more than 250,000 reference inquiries. In addition, as Japan's national library, the NDL collects copies of all publications published in Japan. The NDL has an extensive collection of some 30 million pages of documents relating to the Occupation of Japan after World War II. This collection includes the documents prepared by General Headquarters and the Supreme Commander for the Allied Powers and the Far Eastern Commission. The NDL also maintains a collection of some 530,000 books and booklets and 2 million microform titles relating to the sciences
National Diet Library
–
Tokyo Main Library of the National Diet Library
National Diet Library
–
Kansai-kan of the National Diet Library
National Diet Library
–
The National Diet Library
National Diet Library
–
Main building in Tokyo
145.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to the exact scope and definition of mathematics. Mathematicians seek out patterns and use them to formulate new conjectures; they resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation and measurement. Practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said, "The universe cannot be read until we have learned the language in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences. 
Applied mathematics has led to entirely new mathematical disciplines, such as statistics and game theory. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind; there is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns. In Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a systematic study of mathematics in its own right: Greek mathematics. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today; the overwhelming majority of works in this ocean contain new mathematical theorems and their proofs. The word máthēma is derived from μανθάνω, while the modern Greek equivalent is μαθαίνω; both mean "to learn". In Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study" even in Classical times
Mathematics
–
Euclid (holding calipers), Greek mathematician, 3rd century BC, as imagined by Raphael in this detail from The School of Athens.
Mathematics
–
Greek mathematician Pythagoras (c. 570 – c. 495 BC), commonly credited with discovering the Pythagorean theorem
Mathematics
–
Leonardo Fibonacci, the Italian mathematician who established the Hindu–Arabic numeral system to the Western World
Mathematics
–
Carl Friedrich Gauss, known as the prince of mathematicians