1.
Outline of trigonometry
–
Trigonometry is a branch of mathematics that studies the relationships between the sides and the angles of triangles. It defines the trigonometric functions, which describe those relationships and apply to cyclical phenomena. Two concepts recur throughout the subject. Geometry is the branch of mathematics concerned with questions of shape, size, relative position of figures, and the properties of space; it is used extensively in trigonometry. An angle is the figure formed by two rays, called the sides of the angle, sharing a common endpoint, called the vertex of the angle. Angles formed by two rays lie in a plane, but this plane does not have to be a Euclidean plane.
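As a brief numerical sketch of what the trigonometric functions describe (the hypotenuse length and angle below are illustrative values), the two legs of a right triangle can be recovered from its hypotenuse and one acute angle:

```python
import math

# Illustrative values: hypotenuse h and one acute angle t of a right triangle.
h = 5.0
t = math.radians(30)  # 30 degrees, converted to radians

# The trigonometric functions relate the angle to the side lengths:
opposite = h * math.sin(t)  # leg opposite the angle
adjacent = h * math.cos(t)  # leg adjacent to the angle

print(round(opposite, 3))  # 2.5
print(round(adjacent, 3))  # 4.33
```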
Outline of trigonometry
–
∠, the angle symbol; in Unicode it is U+2220
2.
Unit circle
–
In mathematics, a unit circle is a circle with a radius of one. Frequently, especially in trigonometry, the unit circle is the circle of radius one centered at the origin in the Cartesian coordinate system in the Euclidean plane. The unit circle is often denoted S¹; the generalization to higher dimensions is the unit sphere. If (x, y) is a point on the unit circle's circumference, then |x| and |y| are the lengths of the legs of a right triangle whose hypotenuse has length 1. Thus, by the Pythagorean theorem, x and y satisfy the equation x² + y² = 1. The interior of the unit circle is called the open unit disk. One may also use other notions of distance to define other unit circles, such as the Riemannian circle; see the article on mathematical norms for additional examples. The unit circle can also be considered as the set of complex numbers of absolute value one; in quantum mechanics, such a number is referred to as a phase factor. The equation x² + y² = 1 gives the relation cos²(t) + sin²(t) = 1. The unit circle also demonstrates that sine and cosine are periodic functions, and triangles constructed on the unit circle can be used to illustrate the periodicity of the trigonometric functions. First, construct a radius OP from the origin O to a point P(x₁, y₁) on the circle such that an angle t with 0 < t < π/2 is formed with the positive arm of the x-axis. Now consider a point Q(x₁, 0) and line segments PQ ⊥ OQ. The result is a right triangle △OPQ with ∠QOP = t. Because PQ has length y₁, OQ has length x₁, and OP has length 1, sin(t) = y₁ and cos(t) = x₁. Having established these equivalences, take another radius OR from the origin to a point R(−x₁, y₁) on the circle such that the same angle t is formed with the negative arm of the x-axis. Now consider a point S(−x₁, 0) and line segments RS ⊥ OS. The result is a right triangle △ORS with ∠SOR = t. It can hence be seen that, because ∠ROQ = π − t, R is at (cos(π − t), sin(π − t)) in the same way that P is at (cos(t), sin(t)).
The conclusion is that, since (−x₁, y₁) is the same as (cos(π − t), sin(π − t)) and (x₁, y₁) is the same as (cos(t), sin(t)), it is true that sin(t) = sin(π − t) and −cos(t) = cos(π − t). It may be inferred in a similar manner that tan(π − t) = −tan(t), since tan(t) = y₁/x₁ and tan(π − t) = y₁/(−x₁). A simple demonstration of the above can be seen in the equality sin(π/4) = sin(3π/4) = 1/√2. When working with right triangles, sine, cosine, and the other trigonometric functions only make sense for angle measures greater than zero and less than π/2. However, when defined with the unit circle, these functions produce meaningful values for any real-valued angle measure, even those greater than 2π.
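The identities above can be checked numerically; the following sketch (with an arbitrary angle t in the first quadrant) verifies the Pythagorean relation, the reflection identities for π − t, and the periodicity of sine:

```python
import math

t = 0.7  # any angle in (0, π/2)

x1, y1 = math.cos(t), math.sin(t)   # point P(x1, y1) on the unit circle

# Pythagorean identity: P lies on x^2 + y^2 = 1
assert abs(x1**2 + y1**2 - 1) < 1e-9

# Reflecting P across the y-axis gives R(-x1, y1) at angle π − t
assert abs(math.sin(math.pi - t) - y1) < 1e-9         # sin(π − t) =  sin(t)
assert abs(math.cos(math.pi - t) + x1) < 1e-9         # cos(π − t) = −cos(t)
assert abs(math.tan(math.pi - t) - y1 / -x1) < 1e-9   # tan(π − t) = −tan(t)

# Periodicity: adding a full turn (2π) leaves sine unchanged
assert abs(math.sin(t + 2 * math.pi) - y1) < 1e-9

print("all identities hold")
```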
Unit circle
–
Illustration of a unit circle. The variable t is an angle measure.
3.
Ancient Greek
–
Ancient Greek includes the forms of Greek used in ancient Greece and the ancient world from around the 9th century BC to the 6th century AD. It is often divided into the Archaic, Classical, and Hellenistic periods. It is antedated in the second millennium BC by Mycenaean Greek. The language of the Hellenistic phase is known as Koine. Koine is regarded as a historical stage of its own, although in its earliest form it closely resembled Attic Greek. Prior to the Koine period, Greek of the Classical and earlier periods included several regional dialects. Ancient Greek was the language of Homer and of fifth-century Athenian historians, playwrights, and philosophers. It has contributed many words to English vocabulary and has been a subject of study in educational institutions of the Western world since the Renaissance. This article primarily contains information about the Epic and Classical phases of the language. Ancient Greek was a pluricentric language, divided into many dialects. The main dialect groups are Attic and Ionic, Aeolic, Arcadocypriot, and Doric. Some dialects are found in standardized literary forms used in literature, while others are attested only in inscriptions. There are also several historical forms. Homeric Greek is a literary form of Archaic Greek used in the epic poems, the Iliad and the Odyssey, and in later poems by other authors. Homeric Greek had significant differences in grammar and pronunciation from Classical Attic. The origins, early form, and development of the Hellenic language family are not well understood because of a lack of contemporaneous evidence. Several theories exist about what Hellenic dialect groups may have existed between the divergence of early Greek-like speech from the common Proto-Indo-European language and the Classical period; they have the same general outline but differ in some of the detail. One theory involves a Dorian invasion; the invasion would not be Dorian unless the invaders had some relationship to the historical Dorians.
The invasion is known to have displaced population to the later Attic-Ionic regions. The Greeks of this period believed there were three major divisions of all Greek people: Dorians, Aeolians, and Ionians, each with their own defining and distinctive dialects. Often non-West Greek is called East Greek. Arcadocypriot apparently descended more closely from the Mycenaean Greek of the Bronze Age. Boeotian had come under a strong Northwest Greek influence and can in some respects be considered a transitional dialect; Thessalian likewise had come under Northwest Greek influence, though to a lesser degree. Most of the dialect sub-groups listed above had further subdivisions, generally equivalent to a city-state and its surrounding territory; Doric notably had several intermediate divisions as well, into Island Doric, Southern Peloponnesus Doric, and Northern Peloponnesus Doric. The Lesbian dialect was Aeolic Greek. The Attic-based Koine slowly replaced most of the older dialects, although the Doric dialect has survived in the Tsakonian language, which is spoken in the region of modern Sparta. Doric has also passed down its aorist terminations into most verbs of Demotic Greek. By about the 6th century AD, the Koine had slowly metamorphosed into Medieval Greek.
Ancient Greek
–
Inscription about the construction of the statue of Athena Parthenos in the Parthenon, 440/439 BC
Ancient Greek
–
Ostracon bearing the name of Cimon, Stoa of Attalos
Ancient Greek
–
The words ΜΟΛΩΝ ΛΑΒΕ as they are inscribed on the marble of the 1955 Leonidas Monument at Thermopylae
4.
Angle
–
In plane geometry, an angle is the figure formed by two rays, called the sides of the angle, sharing a common endpoint, called the vertex of the angle. Angles formed by two rays lie in a plane, but this plane does not have to be a Euclidean plane. Angles are also formed by the intersection of two planes in Euclidean and other spaces. Angles formed by the intersection of two curves in a plane are defined as the angle determined by the tangent rays at the point of intersection. Similar statements hold in space; for example, the angle formed by two great circles on a sphere is the dihedral angle between the planes determined by the great circles. Angle is also used to designate the measure of an angle or of a rotation. This measure is the ratio of the length of a circular arc to its radius. In the case of an angle, the arc is centered at the vertex and delimited by the sides. In the case of a rotation, the arc is centered at the center of the rotation and delimited by any other point and its image by the rotation. The word angle comes from the Latin word angulus, meaning corner; cognate words include the Greek ἀγκύλος, meaning crooked or curved. Both are connected with the Proto-Indo-European root *ank-, meaning to bend or bow. Euclid defines a plane angle as the inclination to each other, in a plane, of two lines which meet each other and do not lie straight with respect to each other. According to Proclus, an angle must be either a quality, a quantity, or a relationship. In mathematical expressions, it is common to use Greek letters to serve as variables standing for the size of some angle. Lower case Roman letters are also used, as are upper case Roman letters in the context of polygons; see the figures in this article for examples. In geometric figures, angles may also be identified by the labels attached to the three points that define them. For example, the angle at vertex A enclosed by the rays AB and AC is denoted ∠BAC. Sometimes, where there is no risk of confusion, the angle may be referred to simply by its vertex.
However, in many geometrical situations it is obvious from context that the positive angle less than or equal to 180 degrees is meant. Otherwise, a convention may be adopted so that ∠BAC always refers to the anticlockwise angle from B to C. Angles smaller than a right angle (less than 90°) are called acute angles. An angle equal to 1/4 turn (90°) is called a right angle; two lines that form a right angle are said to be normal, orthogonal, or perpendicular. Angles larger than a right angle and smaller than a straight angle (between 90° and 180°) are called obtuse angles. An angle equal to 1/2 turn (180°) is called a straight angle. Angles larger than a straight angle but less than 1 turn are called reflex angles.
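The ∠BAC notation can be made computational: given the three points that define an angle, its measure follows from the dot product of the two rays. A minimal sketch (the function name is illustrative):

```python
import math

def angle_at_vertex(a, b, c):
    """Measure of angle ∠BAC at vertex a, formed by rays AB and AC (2-D points)."""
    abx, aby = b[0] - a[0], b[1] - a[1]   # ray AB as a vector
    acx, acy = c[0] - a[0], c[1] - a[1]   # ray AC as a vector
    dot = abx * acx + aby * acy
    norms = math.hypot(abx, aby) * math.hypot(acx, acy)
    return math.acos(dot / norms)         # result in [0, π] radians

# Rays along the positive x- and y-axes form a right angle (1/4 turn)
print(angle_at_vertex((0, 0), (1, 0), (0, 1)))  # π/2 ≈ 1.5708
```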
Angle
–
An angle enclosed by rays emanating from a vertex.
5.
Triangle
–
A triangle is a polygon with three edges and three vertices. It is one of the basic shapes in geometry. A triangle with vertices A, B, and C is denoted △ABC. In Euclidean geometry, any three non-collinear points determine a unique triangle and a unique plane. This article is about triangles in Euclidean geometry except where otherwise noted. Triangles can be classified according to the lengths of their sides. An equilateral triangle has all sides the same length; it is also a regular polygon with all angles measuring 60°. An isosceles triangle has two sides of equal length. Some mathematicians define an isosceles triangle to have exactly two equal sides, whereas others define it as one with at least two equal sides; the latter definition would make all equilateral triangles isosceles triangles. The 45–45–90 right triangle, which appears in the tetrakis square tiling, is isosceles. A scalene triangle has all its sides of different lengths; equivalently, it has all angles of different measure. Hatch marks, also called tick marks, are used in diagrams of triangles to identify sides of equal length. A side can be marked with a pattern of ticks, short line segments in the form of tally marks; two sides have equal lengths if they are both marked with the same pattern. In a triangle, the pattern is no more than 3 ticks. Similarly, patterns of 1, 2, or 3 concentric arcs inside the angles are used to indicate equal angles. Triangles can also be classified according to their internal angles, measured here in degrees. A right triangle has one of its interior angles measuring 90°. The side opposite the right angle is the hypotenuse, the longest side of the triangle; the other two sides are called the legs or catheti of the triangle. Special right triangles are right triangles with additional properties that make calculations involving them easier. One of the two most famous is the 3–4–5 right triangle, where 3² + 4² = 5²; in this situation, 3, 4, and 5 are a Pythagorean triple.
The other one is a right triangle with two angles that each measure 45 degrees. Triangles that do not have an angle measuring 90° are called oblique triangles. A triangle with all interior angles measuring less than 90° is an acute triangle or acute-angled triangle; if c is the length of the longest side, then a² + b² > c². A triangle with one interior angle measuring more than 90° is an obtuse triangle or obtuse-angled triangle; if c is the length of the longest side, then a² + b² < c². A triangle with an interior angle of 180° is degenerate.
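The comparisons of a² + b² with c² above give a direct classification test from the three side lengths. A minimal sketch (the function name is illustrative):

```python
def classify_by_angles(a, b, c):
    """Classify a triangle as acute, right, or obtuse from its side lengths,
    by comparing a^2 + b^2 with c^2, where c is the longest side."""
    a, b, c = sorted((a, b, c))      # ensure c is the longest side
    lhs, rhs = a * a + b * b, c * c
    if lhs > rhs:
        return "acute"
    if lhs < rhs:
        return "obtuse"
    return "right"

print(classify_by_angles(3, 4, 5))   # right  (3-4-5 Pythagorean triple)
print(classify_by_angles(2, 3, 4))   # obtuse (4 + 9 < 16)
print(classify_by_angles(5, 5, 5))   # acute  (equilateral)
```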
Triangle
–
The Flatiron Building in New York is shaped like a triangular prism
Triangle
–
A triangle
6.
Hellenistic period
–
The Hellenistic period covers the period of Mediterranean history between the death of Alexander the Great in 323 BC and the emergence of the Roman Empire. It is often considered a period of transition, sometimes even of decadence or degeneration, compared to the enlightenment of the Greek Classical era. The Hellenistic period saw the rise of New Comedy, Alexandrian poetry, and the Septuagint, and Greek science was advanced by the works of the mathematician Euclid and the polymath Archimedes. The religious sphere expanded to include new gods such as the Greco-Egyptian Serapis and eastern deities such as Attis and Cybele. The Hellenistic period was characterized by a new wave of Greek colonization which established Greek cities and kingdoms in Asia and Africa. This resulted in the export of Greek culture and language to these new realms. Equally, however, these new kingdoms were influenced by the indigenous cultures, adopting local practices where beneficial or necessary. Hellenistic culture thus represents a fusion of the Ancient Greek world with that of the Near East and Middle East. This mixture gave rise to a common Attic-based Greek dialect, known as Koine Greek, which became the lingua franca throughout the Hellenistic world. Scholars and historians are divided as to what event signals the end of the Hellenistic era. Hellenistic is distinguished from Hellenic in that the former encompasses the entire sphere of direct ancient Greek influence, while the latter refers to Greece itself. The word originated from the German term hellenistisch, from Ancient Greek Ἑλληνιστής, from Ἑλλάς. Hellenistic is a modern word and a 19th-century concept; the idea of a Hellenistic period did not exist in Ancient Greece, although words related in form or meaning, e.g. Hellenist, have been attested since ancient times. The major issue with the term Hellenistic lies in its convenience, as the spread of Greek culture was not the generalized phenomenon that the term implies. Some areas of the world were more affected by Greek influences than others. The Greek population and the native population did not always mix; the Greeks moved and brought their own culture.
While a few fragments exist, there is no complete surviving historical work which dates to the hundred years following Alexander's death. The works of the major Hellenistic historians Hieronymus of Cardia and Duris of Samos are lost. The earliest and most credible surviving source for the Hellenistic period is Polybius of Megalopolis, a statesman of the Achaean League until 168 BC, when he was forced to go to Rome as a hostage. His Histories eventually grew to a length of forty books, covering the years 220 to 167 BC. Another important source, Plutarch's Parallel Lives, though more preoccupied with issues of personal character and morality, outlines the history of important Hellenistic figures. Appian of Alexandria wrote a history of the Roman empire that includes information on some Hellenistic kingdoms. Other sources include Justin's epitome of Pompeius Trogus' Historiae Philippicae and a summary of Arrian's Events after Alexander by Photios I of Constantinople. Lesser supplementary sources include Curtius Rufus, Pausanias, and Pliny. In the field of philosophy, Diogenes Laertius' Lives and Opinions of Eminent Philosophers is the main source. Ancient Greece had traditionally been a collection of fiercely independent city-states. After the Peloponnesian War, Greece had fallen under a Spartan hegemony, in which Sparta was pre-eminent but not all-powerful.
Hellenistic period
–
The Nike of Samothrace is considered one of the greatest masterpieces of Hellenistic art.
Hellenistic period
–
Alexander fighting the Persian king Darius III. From the Alexander Mosaic, Naples National Archaeological Museum.
Hellenistic period
–
Alexander's empire at the time of its maximum expansion.
7.
Geometry
–
Geometry is a branch of mathematics concerned with questions of shape, size, relative position of figures, and the properties of space. A mathematician who works in the field of geometry is called a geometer. Geometry arose independently in a number of early cultures as a practical way for dealing with lengths, areas, and volumes. Geometry began to see elements of formal mathematical science emerging in the West as early as the 6th century BC. By the 3rd century BC, geometry was put into an axiomatic form by Euclid, whose treatment, Euclid's Elements, set a standard for many centuries to follow. Geometry arose independently in India, with texts providing rules for geometric constructions appearing as early as the 3rd century BC. Islamic scientists preserved Greek ideas and expanded on them during the Middle Ages. By the early 17th century, geometry had been put on a solid analytic footing by mathematicians such as René Descartes. Since then, and into modern times, geometry has expanded into non-Euclidean geometry and manifolds. While geometry has evolved significantly throughout the years, there are some general concepts that are more or less fundamental to geometry. These include the concepts of points, lines, planes, surfaces, angles, and curves. Contemporary geometry has many subfields. Euclidean geometry is geometry in its classical sense; the mandatory educational curriculum of the majority of nations includes the study of points, lines, planes, angles, triangles, congruence, similarity, solid figures, and circles. Euclidean geometry also has applications in computer science, crystallography, and various branches of modern mathematics. Differential geometry uses techniques of calculus and linear algebra to study problems in geometry; it has applications in physics, including in general relativity. Topology is the field concerned with the properties of geometric objects that are unchanged by continuous mappings.
In practice, this often means dealing with large-scale properties of spaces, such as connectedness and compactness. Convex geometry investigates convex shapes in the Euclidean space and its more abstract analogues, often using techniques of real analysis; it has close connections to convex analysis, optimization, and functional analysis. Algebraic geometry studies geometry through the use of multivariate polynomials and other algebraic techniques; it has applications in many areas, including cryptography and string theory. Discrete geometry is concerned mainly with questions of relative position of simple geometric objects, such as points, lines, and circles; it shares many methods and principles with combinatorics. Geometry has applications to many fields, including art, architecture, and physics, as well as to other branches of mathematics. The earliest recorded beginnings of geometry can be traced to ancient Mesopotamia and Egypt. The earliest known texts on geometry are the Egyptian Rhind Papyrus and Moscow Papyrus, and the Babylonian clay tablets such as Plimpton 322. For example, the Moscow Papyrus gives a formula for calculating the volume of a truncated pyramid. Later clay tablets demonstrate that Babylonian astronomers implemented trapezoid procedures for computing Jupiter's position and motion within time-velocity space.
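The Moscow Papyrus formula for a truncated square pyramid (a frustum) with base side a, top side b, and height h is V = (h/3)(a² + ab + b²); it can be checked against the papyrus's own worked example of base 4, top 2, and height 6:

```python
def frustum_volume(a, b, h):
    """Volume of a truncated square pyramid with base side a, top side b,
    and height h, per the formula stated in the Moscow Papyrus."""
    return h * (a * a + a * b + b * b) / 3

# The papyrus's worked example: base 4, top 2, height 6 gives volume 56
print(frustum_volume(4, 2, 6))  # 56.0
```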
Geometry
–
Visual checking of the Pythagorean theorem for the (3, 4, 5) triangle as in the Chou Pei Suan Ching 500–200 BC.
Geometry
–
An illustration of Desargues' theorem, an important result in Euclidean and projective geometry
Geometry
–
Geometry lessons in the 20th century
Geometry
–
A European and an Arab practicing geometry in the 15th century.
8.
Astronomy
–
Astronomy is a natural science that studies celestial objects and phenomena. It applies mathematics, physics, and chemistry in an effort to explain the origin of those objects and phenomena and their evolution. Objects of interest include planets, moons, stars, galaxies, and comets, while the phenomena include supernova explosions, gamma ray bursts, and cosmic microwave background radiation. More generally, all astronomical phenomena that originate outside Earth's atmosphere are within the purview of astronomy. A related but distinct subject, physical cosmology, is concerned with the study of the Universe as a whole. Astronomy is the oldest of the natural sciences. The early civilizations in recorded history, such as the Babylonians, Greeks, Indians, Egyptians, Nubians, Iranians, and Chinese, performed methodical observations of the night sky. During the 20th century, the field of professional astronomy split into observational and theoretical branches. Observational astronomy is focused on acquiring data from observations of astronomical objects; theoretical astronomy is oriented toward the development of computer or analytical models to describe astronomical objects and phenomena. The two fields complement each other, with theoretical astronomy seeking to explain the observational results and observations being used to confirm theoretical results. Astronomy is one of the few sciences where amateurs can play an active role, especially in the discovery and observation of transient phenomena. Amateur astronomers have made and contributed to many important astronomical discoveries. Astronomy means law of the stars. Astronomy should not be confused with astrology, the belief system which claims that human affairs are correlated with the positions of celestial objects. Although the two share a common origin, they are now entirely distinct. Generally, either the term astronomy or astrophysics may be used to refer to this subject; however, since most modern astronomical research deals with subjects related to physics, modern astronomy could actually be called astrophysics.
A few fields, such as astrometry, are purely astronomy rather than also astrophysics. Some titles of the leading scientific journals in this field include The Astronomical Journal, The Astrophysical Journal, and Astronomy and Astrophysics. In early times, astronomy only comprised the observation and predictions of the motions of objects visible to the naked eye. In some locations, early cultures assembled massive artifacts that possibly had some astronomical purpose. Before tools such as the telescope were invented, early study of the stars was conducted using the naked eye. Most of early astronomy consisted of mapping the positions of the stars and planets, a science now referred to as astrometry. From these observations, early ideas about the motions of the planets were formed, and the nature of the Sun and Moon was explored. The Earth was believed to be the center of the Universe, with the Sun, the Moon, and the stars rotating around it. This is known as the geocentric model of the Universe, or the Ptolemaic system. The Babylonians discovered that lunar eclipses recurred in a repeating cycle known as a saros.
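The saros arises because eclipses recur after 223 synodic (new moon to new moon) months. A quick computation with the mean synodic month length shows the familiar figure of roughly 18 years:

```python
# The saros: 223 synodic months, after which eclipses recur.
synodic_month = 29.53059           # mean length of a synodic month, in days
saros_days = 223 * synodic_month

print(round(saros_days, 2))            # 6585.32 days
print(round(saros_days / 365.25, 2))   # about 18.03 years
```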
Astronomy
–
A star -forming region in the Large Magellanic Cloud, an irregular galaxy.
Astronomy
–
A giant Hubble mosaic of the Crab Nebula, a supernova remnant
Astronomy
–
19th century Sydney Observatory, Australia (1873)
Astronomy
–
19th century Quito Astronomical Observatory is located 12 minutes south of the Equator in Quito, Ecuador.
9.
Astronomers
–
An astronomer is a scientist in the field of astronomy who concentrates their studies on a specific question or field outside the scope of Earth. They look at stars, planets, moons, comets, and galaxies, as well as other celestial objects, in either observational or theoretical astronomy. Examples of topics or fields astronomers work on include planetary science and solar astronomy; there are also related but distinct subjects like physical cosmology, which studies the Universe as a whole. Astronomers usually fall into two types. Observational astronomers make direct observations of planets, stars, and galaxies and analyze the data. Theoretical astronomers create and investigate models of things that cannot be observed; they use observational data to create models or simulations to theorize how different celestial bodies work. There are further subcategories inside these two main branches of astronomy, such as planetary astronomy, galactic astronomy, and physical cosmology. Today, the historical distinction between astronomy and astrophysics has largely disappeared, and the terms astronomer and astrophysicist are used interchangeably. Professional astronomers are highly educated individuals who typically have a Ph.D. in physics or astronomy and are employed by research institutions or universities. They spend the majority of their time working on research, although they quite often have other duties such as teaching or building instruments. The number of astronomers in the United States is actually quite small. The American Astronomical Society, which is the major organization of professional astronomers in North America, has approximately 7,000 members; this number includes scientists from other fields, such as physics and geology, whose interests are tied to astronomy. The International Astronomical Union comprises almost 10,145 members from 70 different countries who are involved in astronomical research at the Ph.D. level. Before CCDs, photographic plates were a common method of observation.
Modern astronomers spend relatively little time at telescopes, usually just a few weeks per year. Analysis of observed phenomena, along with making predictions as to the causes of what they observe, takes up the majority of observational astronomers' time. Astronomers who serve as faculty spend much of their time teaching undergraduate and graduate classes. Most universities also have outreach programs, including public telescope time and sometimes planetariums, as a public service to encourage interest in the field. Those who become astronomers usually have a broad background in maths and the sciences. Taking courses that teach how to research, write, and present papers is also invaluable. In college or university, most astronomers get a Ph.D. in astronomy or physics. Keeping in mind how few astronomers there are, it is understood that graduate schools in this field are very competitive.
Astronomers
–
The Astronomer by Johannes Vermeer
Astronomers
–
Galileo is often referred to as the Father of modern astronomy
Astronomers
–
Guy Consolmagno (Vatican Observatory), analyzing a meteorite, 2014
Astronomers
–
Emily Lakdawalla at the Planetary Conference 2013
10.
Pure mathematics
–
Broadly speaking, pure mathematics is mathematics that studies entirely abstract concepts. Even though the pure and applied viewpoints are distinct philosophical positions, in practice there is much overlap in the activity of pure and applied mathematicians. To develop accurate models for describing the real world, many applied mathematicians draw on tools and techniques that are considered to be pure mathematics. On the other hand, many pure mathematicians draw on natural and social phenomena as inspiration for their abstract research. Ancient Greek mathematicians were among the earliest to make a distinction between pure and applied mathematics. Plato helped to create the gap between arithmetic, now called number theory, and logistic, now called arithmetic. Euclid of Alexandria, when asked by one of his students of what use was the study of geometry, is said to have told his slave to give the student a coin, since he must make gain of what he learns. The term itself is enshrined in the full title of the Sadleirian Chair, founded in the mid-nineteenth century. The idea of a separate discipline of pure mathematics may have emerged at that time. The generation of Gauss made no sweeping distinction of the kind; in the following years, specialisation and professionalisation started to make a rift more apparent. At the start of the twentieth century, mathematicians took up the axiomatic method; in fact, in an axiomatic setting, rigorous adds nothing to the idea of proof. Pure mathematics, according to a view that can be ascribed to the Bourbaki group, is what is proved. Pure mathematician became a recognized vocation, achievable through training. One central concept in pure mathematics is the idea of generality. One can use generality to avoid duplication of effort, proving a general result instead of having to prove separate cases independently. Generality can facilitate connections between different branches of mathematics; category theory is one area of mathematics dedicated to exploring this commonality of structure as it plays out in some areas of math. Generality's impact on intuition is both dependent on the subject and a matter of personal preference or learning style.
Often generality is seen as a hindrance to intuition, although it can certainly function as an aid to it. Each of these branches of abstract mathematics has many sub-specialties. A steep rise in abstraction was seen in the mid-20th century. In practice, however, these developments led to a sharp divergence from physics, particularly from 1950 to 1983. Later this was criticised, for example by Vladimir Arnold, as too much Hilbert, not enough Poincaré. The point does not yet seem to be settled, in that string theory pulls one way, while discrete mathematics pulls back towards proof as central.
Pure mathematics
–
An illustration of the Banach–Tarski paradox, a famous result in pure mathematics. Although it is proven that it is possible to convert one sphere into two using nothing but cuts and rotations, the transformation involves objects that cannot exist in the physical world.
11.
Fourier transform
–
The Fourier transform decomposes a function of time into the frequencies that make it up, in a way similar to how a musical chord can be expressed as the frequencies of its constituent notes. The Fourier transform of a function of time is called the frequency domain representation of the original signal; the term Fourier transform refers to both the frequency domain representation and the mathematical operation that associates the frequency domain representation to a function of time. The Fourier transform is not limited to functions of time, but in order to have a unified language, the domain of the original function is commonly referred to as the time domain. Linear operations performed in one domain have corresponding operations in the other domain, which are sometimes easier to perform. The operation of differentiation in the time domain corresponds to multiplication by the frequency. Also, convolution in the time domain corresponds to ordinary multiplication in the frequency domain. Concretely, this means that any linear time-invariant system, such as a filter applied to a signal, can be expressed relatively simply as an operation on frequencies; after performing the desired operations, transformation of the result can be made back to the time domain. Functions that are localized in the time domain have Fourier transforms that are spread out across the frequency domain, and vice versa. The Fourier transform of a Gaussian function is another Gaussian function. Joseph Fourier introduced the transform in his study of heat transfer, where Gaussian functions appear as solutions of the heat equation. The Fourier transform can also be generalized to functions of several variables on Euclidean space. In general, functions to which Fourier methods are applicable are complex-valued. The Fourier series is routinely employed to handle periodic functions; the fast Fourier transform (FFT) is an algorithm for computing the discrete Fourier transform (DFT). The Fourier transform of the function f is traditionally denoted by adding a circumflex, f̂. There are several conventions for defining the Fourier transform of an integrable function f : ℝ → ℂ.
Here we will use the definition f̂(ξ) = ∫_(−∞)^(∞) f(x) e^(−2πixξ) dx. When the independent variable x represents time, the transform variable ξ represents frequency. Under suitable conditions, f is determined by f̂ via the inverse transform f(x) = ∫_(−∞)^(∞) f̂(ξ) e^(2πiξx) dξ. The functions f and f̂ often are referred to as a Fourier integral pair or Fourier transform pair. For other common conventions and notations, including using the angular frequency ω instead of the frequency ξ, see Other conventions. The Fourier transform on Euclidean space is treated separately, in which the variable x often represents position and ξ momentum. Many other characterizations of the Fourier transform exist. For example, one uses the Stone–von Neumann theorem: the Fourier transform is the unique unitary intertwiner for the symplectic and Euclidean Schrödinger representations of the Heisenberg group. In 1822, Joseph Fourier showed that some functions could be written as an infinite sum of harmonics.
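The discrete Fourier transform mentioned above can be sketched numerically. The following is a naive O(N²) implementation (not the fast Fourier transform), used here to sample a 5 Hz sinusoid and locate its peak in the frequency domain; the function name and sampling values are illustrative:

```python
import cmath
import math

def dft(samples):
    """Naive discrete Fourier transform: X[k] = sum_n x[n] e^(-2πi k n / N)."""
    n_pts = len(samples)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * n / n_pts)
                for n, x in enumerate(samples))
            for k in range(n_pts)]

# One second of a 5 Hz sinusoid, sampled 64 times (so bin k = k Hz)
fs = 64
signal = [math.sin(2 * math.pi * 5 * n / fs) for n in range(fs)]

spectrum = dft(signal)
# Search the non-negative frequencies (bins 0 .. fs/2) for the strongest one
peak_bin = max(range(fs // 2 + 1), key=lambda k: abs(spectrum[k]))
print(peak_bin)  # 5 — the frequency domain representation peaks at 5 Hz
```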
Fourier transform
12.
Mechanical engineering
–
Mechanical engineering is the discipline that applies the principles of engineering, physics, and materials science for the design, analysis, manufacturing, and maintenance of mechanical systems. It is the branch of engineering that involves the design, production, and operation of machinery, and it is one of the oldest and broadest of the engineering disciplines. The mechanical engineering field requires an understanding of core areas including mechanics, kinematics, thermodynamics, materials science, and structural analysis. Mechanical engineering emerged as a field during the Industrial Revolution in Europe in the 18th century; however, mechanical engineering science emerged in the 19th century as a result of developments in the field of physics. The field has continually evolved to incorporate advancements in technology, and mechanical engineers today are pursuing developments in such fields as composites and mechatronics. Mechanical engineers may also work in the field of biomedical engineering, specifically with biomechanics, transport phenomena, biomechatronics, and bionanotechnology. Mechanical engineering finds its application in the archives of various ancient societies. In ancient Greece, the works of Archimedes deeply influenced mechanics in the Western tradition, and Heron of Alexandria created the first steam engine. In China, Zhang Heng improved a water clock and invented a seismometer. During the 7th to 15th century, the era called the Islamic Golden Age, there were remarkable contributions from Muslim inventors in the field of mechanical technology. Al-Jazari, who was one of them, wrote his famous Book of Knowledge of Ingenious Mechanical Devices in 1206; he is also considered to be the inventor of such mechanical devices which now form the very basis of mechanisms, such as the crankshaft and camshaft. Newton was reluctant to publish his methods and laws for years, but he was finally persuaded to do so by his colleagues; Gottfried Wilhelm Leibniz is also credited with creating calculus during the same time frame.
On the European continent, Johann von Zimmermann founded the first factory for grinding machines in Chemnitz. Education in mechanical engineering has historically been based on a strong foundation in mathematics and science. Degrees in mechanical engineering are offered at universities worldwide. In Spain, Portugal and most of South America, where neither B.Sc. nor B.Tech. programs have been adopted, the formal name for the degree is Mechanical Engineer, and the course work is based on five or six years of training. In Italy the course work is based on five years of education and training; in Greece, the coursework is based on a five-year curriculum and the requirement of a Diploma Thesis, upon completion of which a Diploma is awarded rather than a B.Sc. In Australia, mechanical engineering degrees are awarded as Bachelor of Engineering or similar nomenclature, although there are a number of specialisations. The degree takes four years of study to achieve. To ensure quality in engineering degrees, Engineers Australia accredits engineering degrees awarded by Australian universities in accordance with the global Washington Accord. Before the degree can be awarded, the student must complete at least 3 months of on-the-job work experience in an engineering firm. Similar systems are present in South Africa and are overseen by the Engineering Council of South Africa.
Mechanical engineering
–
Mechanical engineers design and build engines, power plants, other machines, structures, and vehicles of all sizes.
Mechanical engineering
–
An oblique view of a four-cylinder inline crankshaft with pistons
Mechanical engineering
–
Training FMS with learning robot SCORBOT-ER 4u, workbench CNC Mill and CNC Lathe
13.
Surveying
–
Surveying or land surveying is the technique, profession, and science of determining the terrestrial or three-dimensional position of points and the distances and angles between them. A land surveying professional is called a land surveyor. Surveyors work with elements of geometry, trigonometry, regression analysis, physics, engineering, metrology, programming languages and the law. Surveying has been an element in the development of the human environment since the beginning of recorded history. The planning and execution of most forms of construction require it, and it is also used in transport, communications, mapping, and the definition of legal boundaries for land ownership. It is an important tool for research in many other scientific disciplines. Basic surveying has occurred since humans built the first large structures; the prehistoric monument at Stonehenge was set out by prehistoric surveyors using peg and rope geometry. In ancient Egypt, a rope stretcher would use simple geometry to re-establish boundaries after the annual floods of the Nile River. The almost perfect squareness and north-south orientation of the Great Pyramid of Giza, built c. 2700 BC, affirm the Egyptians' command of surveying. The groma instrument originated in Mesopotamia. The mathematician Liu Hui described ways of measuring distant objects in his work Haidao Suanjing or The Sea Island Mathematical Manual. The Romans recognized land surveyors as a profession; they established the basic measurements under which the Roman Empire was divided. Roman surveyors were known as Gromatici. In medieval Europe, beating the bounds maintained the boundaries of a village or parish; this was the practice of gathering a group of residents and walking around the parish or village to establish a communal memory of the boundaries. Young boys were included to ensure the memory lasted as long as possible. In England, William the Conqueror commissioned the Domesday Book in 1086.
It recorded the names of all the land owners, the area of land they owned, and the quality of the land. It did not include maps showing exact locations. Abel Foullon described a plane table in 1551, but it is thought that the instrument was in use earlier, as his description is of a developed instrument. Gunter's chain was introduced in 1620 by English mathematician Edmund Gunter; it enabled plots of land to be accurately surveyed and plotted for legal and commercial purposes. Leonard Digges described a theodolite that measured horizontal angles in his book A geometric practice named Pantometria. Joshua Habermel created a theodolite with a compass and tripod in 1576. Jonathan Sisson was the first to incorporate a telescope on a theodolite in 1725. In the 18th century, modern techniques and instruments for surveying began to be used. Jesse Ramsden introduced the first precision theodolite in 1787; it was an instrument for measuring angles in the horizontal and vertical planes.
Surveying
–
A surveyor at work with an infrared reflector used for distance measurement.
Surveying
–
Table of Surveying, 1728 Cyclopaedia
Surveying
–
A map of India showing the Great Trigonometrical Survey, produced in 1870
Surveying
–
A German engineer surveying during the First World War, 1918
14.
Plane (geometry)
–
In mathematics, a plane is a flat, two-dimensional surface that extends infinitely far. A plane is the two-dimensional analogue of a point (zero dimensions), a line (one dimension) and three-dimensional space. When working exclusively in two-dimensional Euclidean space, the definite article is used, so the plane refers to the whole space. Many fundamental tasks in mathematics, geometry, trigonometry, graph theory and graphing are performed in a two-dimensional space, or, in other words, in the plane. Euclid set forth the first great landmark of mathematical thought, an axiomatic treatment of geometry. He selected a small core of undefined terms and postulates which he used to prove various geometrical statements, although the plane in its modern sense is not directly given a definition anywhere in the Elements. In his work Euclid never makes use of numbers to measure length, angle, or area; in this way the Euclidean plane is not quite the same as the Cartesian plane. This section is concerned with planes embedded in three dimensions, specifically, in R3. In a Euclidean space of any number of dimensions, a plane is determined by any of the following: three non-collinear points, a line and a point not on that line, two distinct but intersecting lines, or two distinct but parallel lines. A line is either parallel to a plane, intersects it at a single point, or is contained in the plane. Two distinct lines perpendicular to the same plane must be parallel to each other. Two distinct planes perpendicular to the same line must be parallel to each other. Specifically, let r0 be the position vector of some point P0 = (x0, y0, z0), and let n = (a, b, c) be a nonzero normal vector. The plane determined by the point P0 and the vector n consists of those points P, with position vector r, such that the vector drawn from P0 to P is perpendicular to n. Recalling that two vectors are perpendicular if and only if their dot product is zero, it follows that the plane can be described as the set of all points r such that n ⋅ (r − r0) = 0. Expanded, this becomes a(x − x0) + b(y − y0) + c(z − z0) = 0, which is the point–normal form of the equation of a plane. This is just a linear equation ax + by + cz + d = 0, where d = −(ax0 + by0 + cz0); this familiar equation for a plane is called the general form of the equation of the plane.
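The point–normal and general forms described above translate directly into a few lines of code. A minimal sketch (the function name is my own):

```python
import numpy as np

def plane_from_point_normal(p0, n):
    """Coefficients (a, b, c, d) of ax + by + cz + d = 0 for the plane through
    point p0 with normal vector n, obtained by expanding n . (r - r0) = 0."""
    a, b, c = n
    d = -float(np.dot(n, p0))
    return float(a), float(b), float(c), d

p0 = np.array([1.0, 2.0, 3.0])   # a point on the plane
n = np.array([4.0, 5.0, 6.0])    # a normal vector (need not be unit length)
a, b, c, d = plane_from_point_normal(p0, n)
# p0 itself must satisfy the general form:
print(a * p0[0] + b * p0[1] + c * p0[2] + d)   # 0.0
```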
Plane (geometry)
–
Vector description of a plane
Plane (geometry)
–
Two intersecting planes in three-dimensional space
15.
Right angle
–
In geometry and trigonometry, a right angle is an angle that bisects the angle formed by two adjacent parts of a straight line. More precisely, if a ray is placed so that its endpoint is on a line and the adjacent angles are equal, then they are right angles. As a rotation, a right angle corresponds to a quarter turn. The presence of a right angle in a triangle is the defining factor for right triangles. The term is a calque of Latin angulus rectus; here rectus means upright. In Unicode, the symbol for a right angle is U+221F ∟ Right angle. It should not be confused with the similarly shaped symbol U+231E ⌞ Bottom left corner. Related symbols are U+22BE ⊾ Right angle with arc, U+299C ⦜ Right angle variant with square, and U+299D ⦝ Measured right angle with dot. The symbol for a measured angle, an arc with a dot, is used in some European countries, including German-speaking countries and Poland. Right angles are fundamental in Euclid's Elements; they are defined in Book 1, definition 10, which also defines perpendicular lines. Euclid uses right angles in definitions 11 and 12 to define acute and obtuse angles. Two angles are called complementary if their sum is a right angle. Book 1 Postulate 4 states that all right angles are equal. Euclid's commentator Proclus gave a proof of this using the previous postulates; Saccheri gave a proof as well, but using a more explicit assumption. In Hilbert's axiomatization of geometry this statement is given as a theorem, but only after much groundwork. A right angle may be expressed in different units: 1/4 turn, 90°, π/2 radians, 100 grad, 8 points, or 6 hours. Throughout history, carpenters and masons have known a quick way to confirm if an angle is a true right angle. It is based on the most widely known Pythagorean triple and is so called the Rule of 3-4-5; this measurement can be made quickly and without technical instruments. The geometric law behind the measurement is the Pythagorean theorem. Thales' theorem states that an angle inscribed in a semicircle is a right angle.
Two application examples in which the right angle and Thales' theorem are included.
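The Rule of 3-4-5 mentioned above is easy to check mechanically; a short sketch:

```python
import math

def is_right_angle(a, b, c, rel_tol=1e-9):
    """Carpenter's check: legs a, b and diagonal c meet at a right angle
    iff a^2 + b^2 = c^2 (Pythagorean theorem)."""
    return math.isclose(a * a + b * b, c * c, rel_tol=rel_tol)

print(is_right_angle(3, 4, 5))   # True  (the Rule of 3-4-5)
print(is_right_angle(3, 4, 6))   # False
```

Any Pythagorean triple works the same way; 3-4-5 is simply the smallest and easiest to measure out with a knotted rope.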
Right angle
–
A right angle is equal to 90 degrees.
16.
Spherical trigonometry
–
Spherical trigonometry is of great importance for calculations in astronomy, geodesy and navigation. The origins of spherical trigonometry in Greek mathematics and the major developments in Islamic mathematics are discussed fully in History of trigonometry. The only significant developments since then have been the application of vector methods for the derivation of the theorems. A spherical polygon is a polygon on the surface of the sphere defined by a number of great-circle arcs; such polygons may have any number of sides. Two planes define a lune, also called a digon or bi-angle, the two-sided analogue of the triangle. Three planes define a spherical triangle, the principal subject of this article. Four planes define a spherical quadrilateral; such a figure, and higher-sided polygons, can always be treated as a number of spherical triangles. From this point the article will be restricted to spherical triangles, denoted simply as triangles. Both vertices and angles at the vertices are denoted by the upper case letters A, B and C. The angles of proper spherical triangles are less than π, so that π < A + B + C < 3π. The sides are denoted by lower-case letters a, b, c. On the unit sphere their lengths are equal to the radian measure of the angles that the great circle arcs subtend at the centre. The sides of proper spherical triangles are less than π, so that 0 < a + b + c < 3π. The radius of the sphere is taken as unity. For specific practical problems on a sphere of radius R the measured lengths of the sides must be divided by R before using the identities given below; likewise, after a calculation on the unit sphere the sides a, b, c must be multiplied by R. The polar triangle associated with a triangle ABC is defined as follows. Consider the great circle that contains the side BC. This great circle is defined by the intersection of a diametral plane with the surface. Draw the normal to that plane at the centre: it intersects the surface at two points, and the point that is on the same side of the plane as A is conventionally termed the pole of A and denoted A′. The points B′ and C′ are defined similarly. The triangle A′B′C′ is the polar triangle corresponding to triangle ABC.
Therefore, if any identity is proved for the triangle ABC then we can derive a second identity by applying the first identity to the polar triangle by making the above substitutions
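The unit-sphere convention above (divide measured side lengths by R, compute, then scale back) can be illustrated with the spherical law of cosines, a standard identity of the subject that is assumed here rather than quoted from the passage:

```python
import math

def spherical_angle_A(a, b, c, R=1.0):
    """Angle A of a spherical triangle from its three sides, via the spherical
    law of cosines (a standard identity, assumed here). Measured side lengths
    on a sphere of radius R are first divided by R, per the convention above."""
    a, b, c = a / R, b / R, c / R        # reduce to the unit sphere
    cos_A = (math.cos(a) - math.cos(b) * math.cos(c)) / (math.sin(b) * math.sin(c))
    return math.acos(cos_A)

# Octant triangle: three sides of length (pi/2) * R meet at three right
# angles, so A + B + C = 3*pi/2, consistent with pi < A + B + C < 3*pi.
R = 6371.0                                # an Earth-sized sphere, for example
side = math.pi / 2 * R
print(math.degrees(spherical_angle_A(side, side, side, R)))
```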
Spherical trigonometry
–
Eight spherical triangles defined by the intersection of three great circles.
17.
Sphere
–
A sphere is a perfectly round geometrical object in three-dimensional space that is the surface of a completely round ball. Like a circle in two dimensions, a sphere is defined mathematically as the set of points that are all at the same distance r from a given point, but in three-dimensional space. This distance r is the radius of the ball, and the given point is the center of the mathematical ball. The longest straight line through the ball, connecting two points of the sphere, passes through the center and its length is thus twice the radius. While outside mathematics the terms sphere and ball are often used interchangeably, in mathematics a distinction is made between the sphere, a two-dimensional surface, and the ball, the three-dimensional region it encloses. The ball and the sphere share the same radius, diameter, and center. The surface area of a sphere is A = 4πr². At any given radius r, the incremental volume equals the product of the surface area at radius r and the thickness of a shell: δV ≈ A(r) · δr. The total volume is the summation of all shell volumes: V ≈ ∑ A(r) · δr. In the limit as δr approaches zero this equation becomes V = ∫_0^r A(r′) dr′. Substituting V = (4/3)πr³: (4/3)πr³ = ∫_0^r A(r′) dr′. Differentiating both sides of this equation with respect to r yields A as a function of r: 4πr² = A(r), which is generally abbreviated as A = 4πr². Alternatively, the area element on the sphere is given in spherical coordinates by dA = r² sin θ dθ dφ. In Cartesian coordinates, the area element is dS = (r / √(r² − ∑_{i≠k} x_i²)) ∏_{i≠k} dx_i, ∀k. For more generality, see area element. The total area can thus be obtained by integration: A = ∫_0^{2π} ∫_0^π r² sin θ dθ dφ = 4πr². In three dimensions, the volume inside a sphere is V = (4/3)πr³, where r is the radius of the sphere. Archimedes first derived this formula, which shows that the volume inside a sphere is 2/3 that of a circumscribed cylinder. In modern mathematics, this formula can be derived using integral calculus. At any given x, the incremental volume equals the product of the cross-sectional area of the disk at x and its thickness: δV ≈ πy² · δx. The total volume is the summation of all disk volumes: V ≈ ∑ πy² · δx. In the limit as δx approaches zero this equation becomes V = ∫_{−r}^{r} πy² dx.
At any given x, a right-angled triangle connects x, y and r to the origin; hence, applying the Pythagorean theorem yields y² = r² − x². Thus, substituting y with a function of x gives V = ∫_{−r}^{r} π(r² − x²) dx, which can now be evaluated as follows: V = π[r²x − x³/3]_{−r}^{r} = π(r³ − r³/3) − π(−r³ + r³/3) = (4/3)πr³.
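The disk summation used in this derivation can be reproduced numerically; a minimal sketch (the function name is my own):

```python
import math

def sphere_volume_by_disks(r, n=100_000):
    """Approximate V = integral_{-r}^{r} pi * (r^2 - x^2) dx by summing the
    volumes of n thin disks (midpoint rule), mirroring the derivation above."""
    dx = 2.0 * r / n
    total = 0.0
    for i in range(n):
        x = -r + (i + 0.5) * dx          # midpoint of the i-th slab
        total += math.pi * (r * r - x * x) * dx
    return total

r = 2.0
print(sphere_volume_by_disks(r), 4.0 / 3.0 * math.pi * r**3)
```

As n grows, the disk sum converges to the closed form (4/3)πr³, exactly as the limit δx → 0 in the text describes.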
Sphere
–
Circumscribed cylinder to a sphere
Sphere
–
A two-dimensional perspective projection of a sphere
Sphere
–
Deck of playing cards illustrating engineering instruments, England, 1702. King of spades: Spheres
18.
Curvature
–
In mathematics, curvature is any of a number of loosely related concepts in different areas of geometry. This article deals primarily with extrinsic curvature. Its canonical example is that of a circle, which has a curvature equal to the reciprocal of its radius everywhere. Smaller circles bend more sharply, and hence have higher curvature. The curvature of a smooth curve is defined as the curvature of its osculating circle at each point. Curvature is normally a scalar quantity, but one may also define a curvature vector that takes into account the direction of the bend in addition to its magnitude. The curvature of more complex objects is described by more complex objects from linear algebra. This article sketches the mathematical framework which describes the curvature of a curve C embedded in a plane. The curvature of C at a point is a measure of how sensitive its tangent line is to moving the point to other nearby points. There are a number of equivalent ways that this idea can be made precise. It is natural to define the curvature of a straight line to be constantly zero. The curvature of a circle of radius R should be large if R is small and small if R is large; thus the curvature of a circle is defined to be the reciprocal of the radius: κ = 1/R. Given any curve C and a point P on it, there is a unique circle or line which most closely approximates the curve near P. The curvature of C at P is then defined to be the curvature of that circle or line; the radius of curvature is defined as the reciprocal of the curvature. Another way to understand the curvature is physical: suppose that a particle moves along the curve with unit speed. Taking the arc length s as the parameter for C, this provides a natural parametrization for the curve. The unit tangent vector T also depends on s; the curvature is then the magnitude of the rate of change of T, symbolically κ = ‖dT/ds‖. This is the magnitude of the acceleration of the particle, and the vector dT/ds is the acceleration vector.
Geometrically, the curvature κ measures how fast the unit tangent vector to the curve rotates. If a curve keeps close to the same direction, the unit tangent vector changes very little and the curvature is small; where the curve undergoes a tight turn, the curvature is large. These two approaches to the curvature are related geometrically by the following observation: in the first definition, the curvature of a circle is equal to the ratio of the angle of an arc to its length. For a suitably smooth curve, there exists a reparametrization with respect to arc length s. This is a parametrization of C such that ‖γ′(s)‖² = x′(s)² + y′(s)² = 1; the velocity vector γ′(s) = T is the unit tangent vector.
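The definition κ = 1/R for a circle can be verified numerically with the standard parametric curvature formula κ = |x′y″ − y′x″| / (x′² + y′²)^(3/2), a textbook identity assumed here rather than taken from the passage (it is invariant under reparametrization, so finite differences over a uniform parameter grid suffice):

```python
import numpy as np

def curvature(x, y):
    """Estimate kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2) for a parametric
    plane curve sampled at evenly spaced parameter values. The formula is
    reparametrization-invariant, so index spacing can stand in for the
    parameter step."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# A circle of radius R = 2 should have curvature 1/R = 0.5 everywhere.
t = np.linspace(0, 2 * np.pi, 2001)
R = 2.0
kappa = curvature(R * np.cos(t), R * np.sin(t))
print(kappa[1000])   # close to 0.5 (finite-difference error only)
```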
Curvature
19.
Elliptic geometry
–
Elliptic geometry has a variety of properties that differ from those of classical Euclidean plane geometry. For example, the sum of the interior angles of any triangle is always greater than 180°. In elliptic geometry, two lines perpendicular to a given line must intersect. In fact, the perpendiculars on one side all intersect at the absolute pole of the given line. There are no antipodal points in elliptic geometry. Every point corresponds to an absolute polar line of which it is the absolute pole. Any point on this polar line forms an absolute conjugate pair with the pole. Such a pair of points is orthogonal, and the distance between them is a quadrant; the distance between a pair of points is proportional to the angle between their absolute polars. As explained by H. S. M. Coxeter: The name elliptic is possibly misleading. It does not imply any direct connection with the curve called an ellipse, but only a rather far-fetched analogy. A central conic is called an ellipse or a hyperbola according as it has no asymptote or two asymptotes; analogously, a non-Euclidean plane is said to be elliptic or hyperbolic according as each of its lines contains no point at infinity or two points at infinity. A simple way to picture elliptic geometry is to look at a globe: neighboring lines of longitude appear to be parallel at the equator, yet they intersect at the poles. More precisely, the surface of a sphere is a model of elliptic geometry if lines are modeled by great circles and antipodal points are identified. With this identification of antipodal points, the model satisfies Euclid's first postulate, which states that two points uniquely determine a line. Metaphorically, we can imagine geometers who are like ants living on the surface of a sphere. Even if the ants are unable to move off the surface, they can still construct lines; the existence of a third dimension is irrelevant to the ants' ability to do geometry, and its existence is neither verifiable nor necessary from their point of view.
Another way of putting this is that the language of the axioms is incapable of expressing the distinction between one model and another. In Euclidean geometry, a figure can be scaled up or scaled down indefinitely, and the resulting figures are similar, i.e. they have the same angles. In elliptic geometry this is not the case. For example, in the spherical model we can see that the distance between any two points must be strictly less than half the circumference of the sphere. A line segment therefore cannot be scaled up indefinitely. A geometer measuring the geometrical properties of the space he or she inhabits can detect, via measurements, that there is a certain distance scale that is a property of the space.
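The excess of the angle sum over 180° is not a defect but a measure of size: by Girard's theorem (a standard result of spherical geometry, assumed here rather than stated in the passage), a spherical triangle's area is proportional to that excess, which is one way the inhabitants could detect the distance scale mentioned above.

```python
import math

def spherical_triangle_area(A, B, C, R=1.0):
    """Area of a spherical triangle with angles A, B, C (radians) on a sphere
    of radius R, via Girard's theorem: area = R^2 * (A + B + C - pi).
    The excess E = A + B + C - pi is positive, as the angle sum exceeds 180 deg."""
    return R * R * (A + B + C - math.pi)

# A triangle with three right angles covers one eighth of the sphere,
# so its area equals (4 pi R^2) / 8.
area = spherical_triangle_area(math.pi / 2, math.pi / 2, math.pi / 2)
print(area, 4 * math.pi / 8)
```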
Elliptic geometry
–
On a sphere, the sum of the angles of a triangle is not equal to 180°. The surface of a sphere is not a Euclidean space, but locally the laws of Euclidean geometry are good approximations. In a small triangle on the face of the Earth, the sum of the angles is very nearly 180°.
Elliptic geometry
–
Projecting a sphere to a plane.
20.
Navigation
–
Navigation is a field of study that focuses on the process of monitoring and controlling the movement of a craft or vehicle from one place to another. The field of navigation includes four general categories: land navigation, marine navigation, aeronautic navigation, and space navigation. It is also the term of art used for the specialized knowledge used by navigators to perform navigation tasks. All navigational techniques involve locating the navigator's position compared to known locations or patterns. Navigation, in a broader sense, can refer to any skill or study that involves the determination of position and direction; in this sense, navigation includes orienteering and pedestrian navigation. For information about different navigation strategies that people use, see human navigation. In the European medieval period, navigation was considered part of the set of seven mechanical arts. Early Pacific Polynesians used the motion of stars, weather, the position of certain wildlife species, or the size of waves to find the path from one island to another. Maritime navigation using scientific instruments such as the mariner's astrolabe first occurred in the Mediterranean during the Middle Ages; the perfecting of this navigation instrument is attributed to Portuguese navigators during early Portuguese discoveries in the Age of Discovery. Open-seas navigation using the astrolabe and the compass started during the Age of Discovery in the 15th century. The Portuguese began systematically exploring the Atlantic coast of Africa from 1418, under the sponsorship of Prince Henry. In 1488 Bartolomeu Dias reached the Indian Ocean by this route. In 1492 the Spanish monarchs funded Christopher Columbus's expedition to sail west to reach the Indies by crossing the Atlantic, which resulted in the Discovery of America.
In 1498, a Portuguese expedition commanded by Vasco da Gama reached India by sailing around Africa; soon, the Portuguese sailed further eastward, to the Spice Islands in 1512, landing in China one year later. The fleet of seven ships sailed from Sanlúcar de Barrameda in Southern Spain in 1519 and crossed the Atlantic Ocean. Some ships were lost, but the remaining fleet continued across the Pacific, making a number of discoveries including Guam and the Philippines. By then, only two galleons were left from the original seven. The Victoria, led by Elcano, sailed across the Indian Ocean and north along the coast of Africa, to finally arrive in Spain in 1522, three years after its departure. The Trinidad sailed east from the Philippines, trying to find a maritime path back to the Americas. He arrived in Acapulco on October 8, 1565. The term stems from the 1530s, from Latin navigationem, from navigatus, past participle of navigare, "to sail, sail over, go by sea, steer a ship", from navis "ship" and the root of agere "to drive". Roughly, the latitude of a place on Earth is its angular distance north or south of the equator; latitude is usually expressed in degrees ranging from 0° at the Equator to 90° at the North and South poles. The height of Polaris in degrees above the horizon is the latitude of the observer. Similar to latitude, the longitude of a place on Earth is the angular distance east or west of the prime meridian or Greenwich meridian. Longitude is usually expressed in degrees ranging from 0° at the Greenwich meridian to 180° east and west. Sydney, for example, has a longitude of about 151° east; New York City has a longitude of 74° west. For most of history, mariners struggled to determine longitude.
Navigation
–
Table of geography, hydrography, and navigation, from the 1728 Cyclopaedia
Navigation
–
Dead reckoning or DR, in which one advances a prior position using the ship's course and speed. The new position is called a DR position. It is generally accepted that only course and speed determine the DR position. Correcting the DR position for leeway, current effects, and steering error results in an estimated position or EP. An inertial navigator develops an extremely accurate EP.
Navigation
–
Pilotage involves navigating in restricted waters with frequent determination of position relative to geographic and hydrographic features.
Navigation
–
Celestial navigation involves reducing celestial measurements to lines of position using tables, spherical trigonometry, and almanacs.
21.
Hipparchus
–
Hipparchus of Nicaea was a Greek astronomer, geographer, and mathematician. He is considered the founder of trigonometry, but is most famous for his incidental discovery of the precession of the equinoxes. Hipparchus was born in Nicaea, Bithynia, and probably died on the island of Rhodes; he is known to have been a working astronomer at least from 162 to 127 BC. Hipparchus is considered the greatest ancient astronomical observer and, by some, the greatest overall astronomer of antiquity. He was the first whose quantitative and accurate models for the motion of the Sun and Moon survive. For this he made use of the observations and perhaps the mathematical techniques accumulated over centuries by the Babylonians. He developed trigonometry and constructed trigonometric tables, and he solved several problems of spherical trigonometry. With his solar and lunar theories and his trigonometry, he may have been the first to develop a reliable method to predict solar eclipses. Relatively little of Hipparchus's direct work survives into modern times; although he wrote at least fourteen books, only his commentary on the popular astronomical poem by Aratus was preserved by later copyists. There is a strong tradition that Hipparchus was born in Nicaea, in the ancient district of Bithynia. His birth date was calculated by Delambre based on clues in his work; Hipparchus must have lived some time after 127 BC because he analyzed and published his observations from that year. Hipparchus obtained information from Alexandria as well as Babylon, but it is not known when or if he visited these places; he is believed to have died on the island of Rhodes, where he seems to have spent most of his later life. It is not known what Hipparchus's economic means were, nor how he supported his scientific activities. His appearance is likewise unknown: there are no contemporary portraits. In the 2nd and 3rd centuries, coins were made in his honour in Bithynia that bear his name and show him with a globe; this supports the tradition that he was born there.
As an astronomer of antiquity, his influence, supported by ideas from Aristotle, held sway for nearly 2000 years. Hipparchus's only preserved work is Τῶν Ἀράτου καὶ Εὐδόξου φαινομένων ἐξήγησις. This is a critical commentary in the form of two books on a popular poem by Aratus based on the work by Eudoxus. Hipparchus also made a list of his major works, which apparently mentioned about fourteen books. His famous star catalog was incorporated into the one by Ptolemy. The first trigonometric table was apparently compiled by Hipparchus, who is now consequently known as the father of trigonometry. According to one book review, there are a variety of mis-steps in the more ambitious 2005 paper, and both of these claims have been rejected by other scholars.
Hipparchus
–
Hipparchus as he appears in " The School of Athens " by Raphael.
22.
Babylonians
–
Babylonia was an ancient Akkadian-speaking state and cultural area based in central-southern Mesopotamia. A small Amorite-ruled state emerged in 1894 BC, which contained at this time the city of Babylon. Babylon greatly expanded during the reign of Hammurabi in the first half of the 18th century BC; during the reign of Hammurabi and afterwards, Babylonia was called Māt Akkadī, "the country of Akkad", in the Akkadian language. It was often involved in rivalry with its older fellow Akkadian-speaking state of Assyria in northern Mesopotamia. It retained the Sumerian language for religious use, but by the time Babylon was founded this was no longer a spoken language, having been wholly subsumed by Akkadian. The earliest mention of the city of Babylon can be found in a tablet from the reign of Sargon of Akkad. During the 3rd millennium BC, a cultural symbiosis occurred between Sumerian and Akkadian-speakers, which included widespread bilingualism. The influence of Sumerian on Akkadian and vice versa is evident in all areas, from lexical borrowing on a massive scale to syntactic, morphological, and phonological convergence. This has prompted scholars to refer to Sumerian and Akkadian in the third millennium BC as a sprachbund. Traditionally, the major religious center of all Mesopotamia was the city of Nippur. The empire eventually disintegrated due to economic decline, climate change and civil war. Sumer rose up again with the Third Dynasty of Ur in the late 22nd century BC; they also seem to have gained ascendancy over most of the territory of the Akkadian kings of Assyria in northern Mesopotamia for a time. The states of the south were unable to stem the Amorite advance. King Ilu-shuma of the Old Assyrian Empire in a known inscription describes his exploits to the south as follows: "The freedom of the Akkadians and their children I established."
"I established their freedom from the border of the marshes and Ur and Nippur, Awal." Past scholars originally extrapolated from this text that it means he defeated the invading Amorites to the south, but there is no explicit record of that. More recently, the text has been taken to mean that Asshur supplied the south with copper from Anatolia. These policies were continued by his successors Erishum I and Ikunum. During the first centuries of what is called the Amorite period, his reign was concerned with establishing statehood amongst a sea of other minor city states and kingdoms in the region. However, Sumuabum appears never to have bothered to give himself the title of King of Babylon, suggesting that Babylon itself was only a minor town or city. He was followed by Sumu-la-El, Sabium, and Apil-Sin, each of whom ruled in the same manner as Sumuabum. Sin-Muballit was the first of these Amorite rulers to be regarded officially as a king of Babylon. The Elamites occupied huge swathes of southern Mesopotamia, and the early Amorite rulers were largely held in vassalage to Elam.
Babylonians
–
Old Babylonian Cylinder Seal, hematite, The king makes an animal offering to Shamash. This seal was probably made in a workshop at Sippar.
Babylonians
–
Geography
23.
Chord (geometry)
–
A chord of a circle is a straight line segment whose endpoints both lie on the circle. A secant line, or just secant, is the infinite line extension of a chord. More generally, a chord is a line segment joining two points on any curve, for instance an ellipse. A chord that passes through a circle's center point is the circle's diameter. Every diameter is a chord, but not every chord is a diameter. The word chord is from the Latin chorda, meaning bowstring. Among properties of chords of a circle are the following: chords are equidistant from the center if and only if their lengths are equal; a chord that passes through the center of a circle is called a diameter, and is the longest chord; and if the line extensions (secant lines) of chords AB and CD intersect at a point P, then their lengths satisfy AP·PB = CP·PD. The area that a circular chord cuts off is called a circular segment. The midpoints of a set of parallel chords of an ellipse are collinear. Chords were used extensively in the early development of trigonometry. The first known trigonometric table, compiled by Hipparchus, tabulated the value of the chord function for every 7.5 degrees. The circle was of diameter 120, and the chord lengths are accurate to two base-60 digits after the integer part. The chord function is defined geometrically as shown in the picture: the chord of an angle is the length of the chord between two points on a unit circle separated by that central angle. It can be related to the modern sine function by crd θ = 2 sin(θ/2); the last step uses the half-angle formula. Much as modern trigonometry is built on the sine function, ancient trigonometry was built on the chord function. Hipparchus is purported to have written a twelve-volume work on chords, all now lost, so presumably a great deal was known about them.
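The chord function and its relation to the sine can be checked directly from the geometric definition; a minimal sketch:

```python
import math

def crd(theta):
    """Chord of angle theta (radians): the distance between two points on a
    unit circle separated by central angle theta."""
    return math.dist((1.0, 0.0), (math.cos(theta), math.sin(theta)))

# crd theta = 2 sin(theta / 2); e.g. the chord of 60 degrees equals the radius.
theta = math.radians(60)
print(crd(theta), 2 * math.sin(theta / 2))
```

Hipparchus's table used a circle of diameter 120, so his tabulated values correspond to 60 · crd θ in this normalization.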
Chord (geometry)
–
The red segment BX is a chord (as is the diameter segment AB).
24.
Inscribed angle
–
In geometry, an inscribed angle is the angle formed in the interior of a circle when two secant lines intersect on the circle. It can also be defined as the angle subtended at a point on the circle by two other points on the circle. Equivalently, an inscribed angle is defined by two chords of the circle sharing an endpoint. The inscribed angle theorem relates the measure of an inscribed angle to that of the central angle subtending the same arc. The inscribed angle theorem states that an angle θ inscribed in a circle is half of the central angle 2θ that subtends the same arc on the circle. Therefore, the angle does not change as its vertex is moved to different positions on the circle. Let O be the center of a circle, as in the diagram at right. Choose two points on the circle, and call them V and A. Draw line VO and extend it past O so that it intersects the circle at point B, which is diametrically opposite the point V. Draw an angle whose vertex is point V and whose sides pass through points A and B. Angle BOA is a central angle; call it θ. Lines OV and OA are both radii of the circle, so they have equal lengths; therefore, triangle VOA is isosceles, so angle BVA and angle VAO are equal. Let each of them be denoted as ψ. Angles BOA and AOV are supplementary; they add up to 180°, since line VB passing through O is a straight line. Therefore, angle AOV measures 180° − θ. It is known that the three angles of a triangle add up to 180°, and the three angles of triangle VOA are 180° − θ, ψ, and ψ. Therefore, 2ψ + 180° − θ = 180°. Subtract 180° from both sides: 2ψ = θ, where θ is the central angle subtending arc AB and ψ is the inscribed angle subtending arc AB. Given a circle whose center is point O, choose three points V, C, and D on the circle. Draw lines VC and VD: angle DVC is an inscribed angle. Now draw line VO and extend it past point O so that it intersects the circle at point E. Angle DVC subtends arc DC on the circle. Suppose this arc includes point E within it.
Point E is diametrically opposite point V. Angles DVE and EVC are also inscribed angles, but both of these angles have one side that passes through the center of the circle, so the theorem from Part 1 above can be applied to them. Angle DOC is a central angle, but so are angles DOE and EOC. Let θ0 = ∠DOC, θ1 = ∠DOE, θ2 = ∠EOC, and likewise let ψ0 = ∠DVC, ψ1 = ∠DVE, ψ2 = ∠EVC. From Part 1 we know that θ1 = 2ψ1 and that θ2 = 2ψ2. Since θ0 = θ1 + θ2, combining these results yields θ0 = 2ψ1 + 2ψ2; and since ψ0 = ψ1 + ψ2, it follows that θ0 = 2ψ0. The previous case can be extended to cover the case where the measure of the inscribed angle is the difference between two inscribed angles, as discussed in the first part of this proof.
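The theorem is easy to check numerically. The sketch below (illustrative Python; the helper names are my own, not from the article) fixes an arc AB on the unit circle with central angle 1.0 radian and verifies that the inscribed angle measured from several vertex positions on the major arc is always half that central angle:

```python
import math

def on_circle(t):
    """Point at angle t on the unit circle centered at the origin O."""
    return (math.cos(t), math.sin(t))

def inscribed_angle(vertex, a, b):
    """Angle at `vertex` subtended by points a and b (all on the circle)."""
    v1 = (a[0] - vertex[0], a[1] - vertex[1])
    v2 = (b[0] - vertex[0], b[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.acos(dot / (math.hypot(*v1) * math.hypot(*v2)))

# Arc endpoints A and B: central angle AOB = 1.2 - 0.2 = 1.0 rad.
A, B = on_circle(0.2), on_circle(1.2)

# Move the vertex V around the major arc: the inscribed angle stays 0.5 rad,
# exactly half the central angle, independent of where V sits.
for t in (2.0, 3.0, 4.0, 5.0):
    V = on_circle(t)
    assert abs(inscribed_angle(V, A, B) - 0.5) < 1e-9
```

The loop exercises the second half of the theorem as well: the vertex can be moved anywhere on the major arc without changing the angle.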
Inscribed angle
–
The inscribed angle θ is half of the central angle 2θ that subtends the same arc on the circle (magenta). Thus, the angle θ does not change as its vertex is moved around on the circle (green, blue and gold angles).
25.
Ptolemy
–
Claudius Ptolemy was a Greek writer, known as a mathematician, astronomer, geographer, astrologer, and poet of a single epigram in the Greek Anthology. He lived in the city of Alexandria in the Roman province of Egypt and wrote in Koine Greek; beyond that, few reliable details of his life are known. His birthplace has been given as Ptolemais Hermiou in the Thebaid in a statement by the 14th-century astronomer Theodore Meliteniotes. This is a very late attestation, however, and there is no reason to suppose that he ever lived anywhere other than Alexandria. Ptolemy wrote several treatises, three of which were of importance to later Byzantine, Islamic and European science. The first is the astronomical treatise now known as the Almagest, although it was originally entitled the Mathematical Treatise. The second is the Geography, which is a discussion of the geographic knowledge of the Greco-Roman world. The third is the astrological treatise in which he attempted to adapt horoscopic astrology to the Aristotelian natural philosophy of his day. This is sometimes known as the Apotelesmatika but more commonly as the Tetrabiblos, from the Greek meaning "Four Books", or by the Latin Quadripartitum. The name Claudius is a Roman nomen; the fact that Ptolemy bore it indicates he lived under the Roman rule of Egypt with the privileges and political rights of Roman citizenship. It would have suited custom if the first of Ptolemy's family to become a citizen took the nomen from a Roman called Claudius who was responsible for granting citizenship; if, as was common, this was the emperor, citizenship would have been granted between AD 41 and 68. The astronomer would also have had a praenomen, which remains unknown. The name Ptolemaeus occurs once in Greek mythology and is of Homeric form. Ptolemy I Soter, a general of Alexander the Great, founded the Greek dynasty that ruled Egypt; all the kings after him, until Egypt became a Roman province in 30 BC, were also Ptolemies. Abu Ma'shar recorded a belief that a different member of this royal line composed the book on astrology and attributed it to Ptolemy.
"The correct answer is not known." Ptolemy wrote in Greek and can be shown to have utilized Babylonian astronomical data. He was a Roman citizen, but most scholars conclude that Ptolemy was ethnically Greek. He was often known in later Arabic sources as "the Upper Egyptian", suggesting he may have had origins in southern Egypt, and later Arabic astronomers, geographers and physicists referred to him by his name in Arabic. Ptolemy's Almagest is the only surviving comprehensive ancient treatise on astronomy. Ptolemy presented his astronomical models in convenient tables, which could be used to compute the future or past position of the planets. The Almagest also contains a star catalogue, which is a version of a catalogue created by Hipparchus.
Ptolemy
–
Engraving of a crowned Ptolemy being guided by the muse Astronomy, from Margarita Philosophica by Gregor Reisch, 1508. Although Abu Ma'shar believed Ptolemy to be one of the Ptolemies who ruled Egypt after the conquest of Alexander, the title "King Ptolemy" is generally viewed as a mark of respect for Ptolemy's elevated standing in science.
Ptolemy
–
Early Baroque artist's rendition
Ptolemy
–
A 15th-century manuscript copy of the Ptolemy world map, reconstituted from Ptolemy's Geography (circa 150), indicating the countries of "Serica" and "Sinae" (China) at the extreme east, beyond the island of "Taprobane" (Sri Lanka, oversized) and the "Aurea Chersonesus" (Malay Peninsula).
Ptolemy
–
Prima Europe tabula. A 15th-century copy of Ptolemy's map of Britain.
26.
Islamic Golden Age
–
This period is traditionally said to have ended with the collapse of the Abbasid caliphate due to the Mongol invasions and the Sack of Baghdad in 1258 AD. A few contemporary scholars place the end of the Islamic Golden Age as late as the end of the 15th to 16th centuries. The metaphor of a golden age began to be applied in 19th-century literature about Islamic history, in the context of the western aesthetic fashion known as Orientalism. There is no unambiguous definition of the term, and its endpoint varies depending on whether it is used with a focus on cultural or on military achievement. During the early 20th century, the term was used only occasionally. The Muslim government heavily patronized scholars. The money spent on the Translation Movement for some translations is estimated to be equivalent to twice the annual research budget of the United Kingdom's Medical Research Council. The best scholars and notable translators, such as Hunayn ibn Ishaq, had salaries estimated to be the equivalent of those of professional athletes today. The House of Wisdom was a library established in Abbasid-era Baghdad, Iraq by Caliph al-Mansur. During this period, the Muslims showed a strong interest in assimilating the knowledge of the civilizations that had been conquered. They also excelled in such fields as philosophy and science. For a long period of time the personal physicians of the Abbasid Caliphs were often Assyrian Christians; among the most prominent Christian families to serve as physicians to the caliphs were the Bukhtishu dynasty. Throughout the 4th to 7th centuries, Christians had carried on scholarly work in Greek and Syriac. The House of Wisdom was founded in Baghdad in 825, modelled after the Academy of Gondishapur. It was led by the Christian physician Hunayn ibn Ishaq and had the support of Byzantine medicine. Many of the most important philosophical and scientific works of the ancient world were translated there, including the work of Galen, Hippocrates, Plato, Aristotle, Ptolemy and Archimedes.
Many scholars of the House of Wisdom were of Christian background. The use of paper spread from China into Muslim regions in the eighth century, arriving in Al-Andalus on the Iberian peninsula, present-day Spain, in the 10th century. Paper was easier to manufacture than parchment and less likely to crack than papyrus, and Islamic paper makers devised assembly-line methods of hand-copying manuscripts to turn out editions far larger than any available in Europe for centuries. It was from these countries that the rest of the world learned to make paper from linen. Ibn Rushd and Ibn Sina played a major role in saving the works of Aristotle, whose ideas came to dominate the non-religious thought of the Christian and Muslim worlds. Ibn Sina and others such as al-Kindi and al-Farabi combined Aristotelianism and Neoplatonism with other ideas introduced through Islam. Arabic philosophic literature was translated into Latin and Ladino, contributing to the development of modern European philosophy. During this period, non-Muslims were allowed to flourish, relative to the treatment of religious minorities in the Christian Byzantine Empire. The Jewish philosopher Moses Maimonides, who lived in Andalusia, is an example. In epistemology, Ibn Tufail wrote the novel Hayy ibn Yaqdhan, and in response Ibn al-Nafis wrote the novel Theologus Autodidactus.
Islamic Golden Age
–
Scholars at an Abbasid library. Maqamat of al-Hariri, illustration by Yahyá al-Wasiti, Baghdad, 1237.
Islamic Golden Age
–
A manuscript written on paper during the Abbasid Era.
Islamic Golden Age
–
Islamic architecture in Alhambra, Al-Andalus, in modern-day Spain
Islamic Golden Age
–
The eye, according to Hunain ibn Ishaq. From a manuscript dated circa 1200.
27.
Surya Siddhanta
–
The Surya Siddhanta is the name of multiple treatises in Indian astronomy. The extant text as translated by Burgess is medieval, but it is based on older versions. It lays down rules to determine the true motions of the luminaries, gives the locations of several stars other than the lunar nakshatras, and treats the calculation of solar eclipses as well as solstices, e.g. the summer solstice on 21 June. Significant coverage is given to kinds of time, the length of the year of devas and asuras, and the day and night of Brahma; the Earth's diameter and circumference are also given. Eclipses and the color of the eclipsed portion of the moon are mentioned. Judging from the dates in the work, Plofker suggests that this Sūrya-siddhānta was composed or revised in the early sixth century. Utpala, a 10th-century commentator of Varahamihira, quotes six shlokas of the Surya Siddhanta of his day; the present version was modified by Bhaskaracharya during the Middle Ages. It is partly based on the Vedanga Jyotisha, which itself might reflect traditions going back to the Indian Iron Age. It is hypothesized that there were contacts between the Indian and Greek astronomers via cultural contact with Hellenistic Greece, specifically the work of Hipparchus, and there were many similarities between the Surya Siddhanta and Greek astronomy of the Hellenistic period. For example, the Surya Siddhanta provides a more accurate and detailed table of sines than Hipparchus, though its model was simpler than the one made by Ptolemy in the 2nd century. The astronomical time cycles contained in the text were remarkably accurate for the time. The Hindu time cycles, copied from an earlier work, are described in verses 11–23 of Chapter 1: "11. That which begins with respirations is called real; six respirations make a vinadi, sixty of these a nadi. 12. And sixty nadis make a day and night. Of thirty of these days is composed a month; a civil month consists of as many sunrises. 13."
"A lunar month, of as many lunar days; a solar month is determined by the entrance of the sun into a sign of the zodiac. 14. This is called a day of the gods. The day and night of the gods and of the demons are mutually opposed to one another. 15. Six times sixty of them are a year of the gods, and twelve thousand of these divine years are denominated a caturyuga; 16. of ten thousand times four hundred and thirty-two solar years is composed that caturyuga, with its dawn and twilight. The difference of the krtayuga and the other yugas, as measured by the difference in the number of the feet of Virtue in each, is as follows: 17."
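The unit conversions in verses 11–16 can be checked by simple arithmetic. The following sketch (illustrative Python; the constant names are my own) multiplies out the conversions, recovering the traditional 21,600 respirations per day and the 4,320,000 solar years of a caturyuga ("ten thousand times four hundred and thirty-two"):

```python
# Units of time from Surya Siddhanta, Chapter 1, verses 11-16.
RESPIRATIONS_PER_VINADI = 6    # "six respirations make a vinadi"
VINADIS_PER_NADI = 60          # "sixty of these a nadi"
NADIS_PER_DAY = 60             # "sixty nadis make a day and night"

# One day of the gods equals one solar year (verse 14), and
# "six times sixty of them" (360 divine days) make a divine year.
SOLAR_YEARS_PER_DIVINE_YEAR = 6 * 60
DIVINE_YEARS_PER_CATURYUGA = 12_000   # "twelve thousand of these divine years"

respirations_per_day = (RESPIRATIONS_PER_VINADI
                        * VINADIS_PER_NADI
                        * NADIS_PER_DAY)
solar_years_per_caturyuga = (DIVINE_YEARS_PER_CATURYUGA
                             * SOLAR_YEARS_PER_DIVINE_YEAR)

print(respirations_per_day)       # 21600
print(solar_years_per_caturyuga)  # 4320000, i.e. 10000 x 432
```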
28.
Western Europe
–
Western Europe, or West Europe, is the region comprising the western part of Europe. Below, some different geographic and geopolitical definitions of the term are outlined. Prior to the Roman conquest, a large part of Western Europe had adopted the newly developed La Tène culture. This cultural and linguistic division was reinforced by the later political east-west division of the Roman Empire. The division between these two halves was enhanced during Late Antiquity and the Middle Ages by a number of events. The Western Roman Empire collapsed, starting the Early Middle Ages; by contrast, the Eastern Roman Empire, mostly known as the Greek or Byzantine Empire, survived. In East Asia, Western Europe was historically known as taixi in China and taisei in Japan, which literally translates as the "Far West". The term Far West became synonymous with Western Europe in China during the Ming dynasty. The Italian Jesuit priest Matteo Ricci was one of the first writers in China to use the Far West as an Asian counterpart to the European concept of the Far East; in his writings, Ricci referred to himself as "Matteo of the Far West". The term was still in use in the late 19th and early 20th centuries. Post-war Europe would be divided into two major spheres: the West, influenced by the United States, and the Eastern Bloc. With the onset of the Cold War, Europe was divided by the Iron Curtain; behind that line lie all the capitals of the ancient states of Central and Eastern Europe. Although some countries were neutral, they were classified according to the nature of their political and economic systems. This division largely defined the popular perception and understanding of Western Europe. The world changed dramatically with the fall of the Iron Curtain in 1989: the Federal Republic of Germany peacefully absorbed the German Democratic Republic, COMECON and the Warsaw Pact were dissolved, and in 1991 the Soviet Union ceased to exist. Several countries which had been part of the Soviet Union regained full independence.
Although the term Western Europe was more prominent during the Cold War, it remains in common use. In 1948 the Treaty of Brussels was signed between Belgium, France, Luxembourg, the Netherlands and the United Kingdom. It was revisited in 1954 at the Paris Conference, when the Western European Union was established. The WEU was declared defunct in 2011, after the Treaty of Lisbon, and the Treaty of Brussels was terminated. When the Western European Union was dissolved, it had 10 member countries, six associate member countries and five observer countries. The CIA divides Western Europe into two smaller subregions. Regional voting blocs were formed in 1961 to encourage voting to various UN bodies from different regional groups. The European Union is an economic and political union of 28 member states that are located primarily in Europe. The Western and Northern European countries of Iceland, Norway, Switzerland and Liechtenstein are members of EFTA, though they cooperate to varying degrees with the European Union.
Western Europe
–
The Great Schism in Christianity, the predominant religion in Western Europe at the time.
Western Europe
–
Geopolitical Occident of Europe
29.
Latin translations of the 12th century
–
These areas had been under Muslim rule for a considerable time, and still had substantial Arabic-speaking populations to support the translators' search. While Muslims were busy translating and adding their own ideas to Greek philosophies, the Latin West remained suspicious of pagan ideas: St. Jerome, for example, was hostile to Aristotle, and St. Augustine had little interest in exploring philosophy, only applying logic to theology. For centuries, Greek ideas in Europe were all but non-existent; only a few monasteries had Greek works, and even fewer of them copied these works. There was a brief period of revival when the Anglo-Saxon monk Alcuin and others reintroduced some Greek ideas during the Carolingian Renaissance. After Charlemagne's death, however, intellectual life again fell into decline. Excepting a few persons promoting Boethius, such as Gerbert of Aurillac, philosophical thought was developed little in Europe for about two centuries. By the 12th century, however, scholastic thought was beginning to develop, leading to the rise of universities throughout Europe. These universities gathered what little Greek thought had been preserved over the centuries, including Boethius' commentaries on Aristotle. They also served as places of discussion for new ideas coming from new translations from Arabic throughout Europe. By the 12th century, European fear of Islam as a military threat had lessened somewhat. Toledo, in Spain, had fallen from Arab hands in 1085, and Sicily in 1091. These linguistic borderlands proved fertile ground for translators, as these areas had been conquered by Arab, Greek and Latin-speaking peoples over the centuries. The small and unscholarly population of the Crusader Kingdoms contributed very little to the translation efforts, until the Fourth Crusade took most of the Byzantine Empire. Sicily, still largely Greek-speaking, was more productive: it had seen rule under Byzantines, Arabs, and Italians, and many of its people were fluent in Greek, Arabic, and Latin. Sicilians, however, were less influenced by Arabic and instead are noted more for their translations directly from Greek to Latin.
Spain, on the other hand, was an ideal place for translation from Arabic to Latin because of the combination of rich Latin and Arab cultures living side by side. In addition, some Arabic literature was translated into Latin. Sicily had been part of the Byzantine Empire until 878 and was under Muslim control from 878 to 1060; as a consequence the Norman Kingdom of Sicily maintained a trilingual bureaucracy, which made it an ideal place for translations. Sicily also maintained relations with the Greek East, which allowed for exchange of ideas. A copy of Ptolemy's Almagest was brought back to Sicily by Henry Aristippus, as a gift from the Emperor to King William I. The Sicilians generally translated directly from the Greek; when Greek texts were not available, they would translate from Arabic. Admiral Eugene of Sicily translated Ptolemy's Optics into Latin, drawing on his knowledge of all three languages in the task. Accursius of Pistoja's translations included the works of Galen and Hunayn ibn Ishaq, and Gerard de Sabloneta translated Avicenna's The Canon of Medicine and al-Razi's Almansor. Fibonacci presented the first complete European account of the Hindu-Arabic numeral system from Arabic sources in his Liber Abaci. The Aphorismi of Masawaiyh was translated by an anonymous translator in late 11th- or early 12th-century Italy.
Latin translations of the 12th century
–
Albohali's De Iudiciis Natiuitatum was translated into Latin by Plato of Tivoli in 1136, and again by John of Seville in 1153. Here is the Nuremberg edition of John of Seville's translation, 1546.
Latin translations of the 12th century
–
Ibn Butlan's Tacuinum sanitatis, Rhineland, 2nd half of the 15th century.
Latin translations of the 12th century
–
King Alfonso X (the Wise)
Latin translations of the 12th century
–
Al-Razi's Recueil des traités de médecine, translated by Gerard of Cremona, second half of the 13th century.
30.
Nasir al-Din al-Tusi
–
Khawaja Muhammad ibn Muhammad ibn al-Hasan al-Tūsī, better known as Nasīr al-Dīn Tūsī, was a Persian polymath, architect, philosopher, physician, scientist, theologian and Marja Taqleed. He was of the Twelver Shī‘ah Islamic belief. The Muslim scholar Ibn Khaldun considered Tusi to be the greatest of the later Persian scholars. Nasir al-Din Tusi was born in the city of Tus in medieval Khorasan in the year 1201. In Hamadan and Tus he studied the Quran, Hadith, Shia jurisprudence, logic, philosophy, mathematics, medicine and astronomy. He was apparently born into a Shī‘ah family and lost his father at a young age. As a young man he moved to Nishapur to study philosophy under Farid al-Din Damad and mathematics under Muhammad Hasib. He also met Farid al-Din Attar, the legendary Sufi master who was killed by Mongol invaders. In Mosul he studied mathematics and astronomy with Kamal al-Din Yunus. He was captured after the invasion of the Alamut castle by the Mongol forces. Tusi has about 150 works, of which 25 are in Persian and the remainder in Arabic. Some of his major works are: Kitāb al-Shakl al-qattāʴ (Book on the complete quadrilateral), a five-volume summary of trigonometry; al-Tadhkirah fi ilm al-hay'a (A memoir on the science of astronomy), about which many commentaries called Sharh al-Tadhkirah were written, including one by Abd al-Ali ibn Muhammad ibn al-Husayn al-Birjandi; Akhlaq-i Nasiri, a work on ethics; al-Risalah al-Asturlabiyah, a treatise on the astrolabe; Zij-i Ilkhani, a major astronomical treatise completed in 1272; Sharh al-isharat; Awsaf al-Ashraf, a short work in Persian; and Tajrīd al-iʿtiqād, a commentary on Shia doctrines. During his stay in Nishapur, Tusi established a reputation as an exceptional scholar. Tusi's prose writings, which number over 150 works, represent one of the largest collections by a single Islamic author.
Writing in both Arabic and Persian, Nasir al-Din Tusi dealt with both religious topics and non-religious or secular subjects. His works include the definitive Arabic versions of the works of Euclid, Archimedes, Ptolemy and Autolycus. Tusi convinced Hulegu Khan to construct an observatory for establishing accurate astronomical tables for better astrological predictions. Beginning in 1259, the Rasad Khaneh observatory was constructed in Azarbaijan, south of the river Aras and west of Maragheh, the capital of the Ilkhanate Empire. Based on the observations made at this, then the most advanced, observatory, Tusi compiled the Zij-i Ilkhani; this book contains astronomical tables for calculating the positions of the planets and the names of the stars. His model for the planetary system is believed to be the most advanced of his time, and many consider him one of the most eminent astronomers between Ptolemy and Copernicus. For his planetary models, he invented a geometrical technique called a Tusi couple, which generates linear motion from the sum of two circular motions.
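The Tusi couple is easy to illustrate: a circle of radius r rolls without slipping inside a circle of radius 2r, and a marked point on the rolling circle traces a straight diameter of the fixed circle, pure linear oscillation built from two circular motions. A minimal sketch (illustrative Python; the function name is my own):

```python
import math

def tusi_point(R, theta):
    """Point traced by a Tusi couple: a circle of radius R/2 rolling
    inside a fixed circle of radius R, after the small circle's center
    has swept angle theta around the fixed circle's center."""
    r = R / 2.0
    # The center of the small circle moves on a circle of radius r...
    cx, cy = r * math.cos(theta), r * math.sin(theta)
    # ...while the small circle counter-rotates at the same rate
    # (rolling without slipping), so the marked point sits at
    # angle -theta on its rim.
    px = cx + r * math.cos(-theta)
    py = cy + r * math.sin(-theta)
    return px, py

# The traced point always lies on the x-axis and oscillates between
# -R and +R: the sum of two circular motions yields linear motion.
for k in range(100):
    theta = 0.0628 * k
    x, y = tusi_point(2.0, theta)
    assert abs(y) < 1e-12
    assert abs(x - 2.0 * math.cos(theta)) < 1e-12
```

Algebraically, py = r·sin θ − r·sin θ = 0 and px = 2r·cos θ = R·cos θ, which is exactly the linear oscillation the text describes.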
Nasir al-Din al-Tusi
–
Persian Muslim scholar Nasīr al-Dīn Tūsī
Nasir al-Din al-Tusi
–
A Treatise on Astrolabe by Tusi, Isfahan 1505
Nasir al-Din al-Tusi
–
Tusi couple from Vat. Arabic ms 319
Nasir al-Din al-Tusi
–
The astronomical observatory of Nasir al-Dīn Tusi.
31.
Germany
–
Germany, officially the Federal Republic of Germany, is a federal parliamentary republic in central-western Europe. It includes 16 constituent states and covers an area of 357,021 square kilometres, with about 82 million inhabitants; Germany is the most populous member state of the European Union. After the United States, it is the second most popular immigration destination in the world. Germany's capital and largest metropolis is Berlin, while its largest conurbation is the Ruhr; other major cities include Hamburg, Munich, Cologne, Frankfurt, Stuttgart, Düsseldorf and Leipzig. Various Germanic tribes have inhabited the northern parts of modern Germany since classical antiquity, and a region named Germania was documented before 100 AD. During the Migration Period the Germanic tribes expanded southward. Beginning in the 10th century, German territories formed a central part of the Holy Roman Empire. During the 16th century, northern German regions became the centre of the Protestant Reformation. In 1871, Germany became a nation state when most of the German states unified into the Prussian-dominated German Empire. After World War I and the German Revolution of 1918–1919, the Empire was replaced by the parliamentary Weimar Republic. The establishment of the national socialist dictatorship in 1933 led to World War II and the Holocaust. After a period of Allied occupation, two German states were founded: the Federal Republic of Germany and the German Democratic Republic. In 1990, the country was reunified. In the 21st century, Germany is a great power and has the world's fourth-largest economy by nominal GDP. As a global leader in several industrial and technological sectors, it is both the world's third-largest exporter and importer of goods. Germany is a developed country with a very high standard of living sustained by a skilled and productive society. It upholds a social security and universal health care system, and environmental protection. Germany was a founding member of the European Economic Community in 1957.
It is part of the Schengen Area and became a co-founder of the Eurozone in 1999. Germany is a member of the United Nations, NATO, the G8, the G20, and the OECD; its national military expenditure is the 9th highest in the world. The English word Germany derives from the Latin Germania, which came into use after Julius Caesar adopted it for the peoples east of the Rhine. The German name Deutschland, by contrast, descends from Proto-Germanic *þiudiskaz ("popular"), derived from *þeudō, descended from Proto-Indo-European *tewtéh₂- ("people"). The discovery of the Mauer 1 mandible shows that ancient humans were present in Germany at least 600,000 years ago. The oldest complete hunting weapons found anywhere in the world were discovered in a mine in Schöningen, where three 380,000-year-old wooden javelins were unearthed.
Germany
–
The Nebra sky disk is dated to c. 1600 BC.
Germany
–
Flag
Germany
–
Martin Luther (1483–1546) initiated the Protestant Reformation.
Germany
–
Foundation of the German Empire in Versailles, 1871. Bismarck is at the center in a white uniform.
32.
Byzantine scholars in Renaissance
–
These émigrés were grammarians, humanists, poets, writers, printers, lecturers, musicians, astronomers, architects, academics, artists, scribes, philosophers, scientists, politicians and theologians. They brought to Western Europe the relatively well-preserved remnants and accumulated knowledge of their own civilization, and their main role within Renaissance humanism was the teaching of the Greek language to their western counterparts, in universities or privately, together with the spread of ancient texts. Their forerunners were Barlaam of Calabria and Leonzio Pilato, both drawn from culturally Byzantine Calabria in southern Italy; the impact of these two scholars on the very first Renaissance humanists was indisputable. These young men had to study the sciences in order later to spread sacred and profane learning among their fellow-countrymen. The construction of the College and Church of S. Atanasio began the same year the first students arrived; until the completion of the college they were housed elsewhere. Crete was especially notable for the Cretan School of icon-painting, which after 1453 became the most important in the Greek world. While Greek learning affected all the subjects of the studia humanitatis, history and philosophy in particular were profoundly affected by the texts and ideas brought from Byzantium. The effects of this knowledge of Greek history can be seen in the writings of humanists on virtue; specifically, these effects are shown in the examples provided from Greek antiquity that displayed virtue as well as vice. The flourishing of philosophical writings in the 15th century revealed the impact of Greek philosophy and science on the Renaissance.
Byzantine scholars in Renaissance
–
Demetrius Chalcondyles (brother of Laonikos Chalkokondyles) (1424–1511) was a Greek Renaissance scholar, Humanist and teacher of Greek and Platonic philosophy.
Byzantine scholars in Renaissance
–
John Argyropoulos (1415–1487) was a Greek Renaissance scholar who played a prominent role in the revival of Greek philosophy in Italy.
Byzantine scholars in Renaissance
–
One of Georgius Gemistus (Plethon)'s manuscripts, in Greek, written in the early 15th century.
Byzantine scholars in Renaissance
–
Manuel Chrysoloras.
33.
Bartholomaeus Pitiscus
–
Bartholomaeus Pitiscus was a 16th-century German trigonometrist, astronomer and theologian who first coined the word "trigonometry". Pitiscus was born in Grünberg in Lower Silesia, nowadays in Poland. He studied theology in Zerbst and Heidelberg. A Calvinist, he was appointed to teach the ten-year-old Frederick IV, Elector Palatine of the Rhine, by Frederick's Calvinist uncle Johann Casimir of Simmern, as Frederick's father had died in 1583. Pitiscus was subsequently appointed court chaplain at Breslau and court preacher to Frederick, and he supported Frederick's subsequent measures against the Roman Catholic Church. His Trigonometria, in which the word "trigonometry" first appeared in print, consists of five books on plane and spherical trigonometry. Pitiscus also edited the Thesaurus mathematicus, in which he improved the trigonometric tables of Georg Joachim Rheticus and corrected Rheticus's Magnus Canon doctrinæ triangulorum. The lunar crater Pitiscus is named after him. The classical scholar Samuel Pitiscus was his nephew.
34.
Leonhard Euler
–
He also introduced much of the modern mathematical terminology and notation, particularly for mathematical analysis, such as the notion of a mathematical function. He is also known for his work in mechanics, fluid dynamics, optics, and astronomy. Euler was one of the most eminent mathematicians of the 18th century and is held to be one of the greatest in history. He is also considered to be the most prolific mathematician of all time: his collected works fill 60 to 80 quarto volumes, more than those of anybody else in the field. He spent most of his adult life in Saint Petersburg, Russia, and in Berlin, then the capital of Prussia. A statement attributed to Pierre-Simon Laplace expresses Euler's influence on mathematics: "Read Euler, read Euler, he is the master of us all." Leonhard Euler was born on 15 April 1707, in Basel, Switzerland, to Paul III Euler, a pastor of the Reformed Church, and Marguerite née Brucker, a pastor's daughter. He had two sisters, Anna Maria and Maria Magdalena, and a younger brother, Johann Heinrich. Soon after the birth of Leonhard, the Eulers moved from Basel to the town of Riehen. Paul Euler was a friend of the Bernoulli family; Johann Bernoulli was then regarded as Europe's foremost mathematician, and would eventually be the most important influence on young Leonhard. Euler's formal education started in Basel, where he was sent to live with his maternal grandmother. In 1720, aged thirteen, he enrolled at the University of Basel. During that time, he was receiving Saturday afternoon lessons from Johann Bernoulli, who quickly discovered his new pupil's incredible talent for mathematics. In 1726, Euler completed a dissertation on the propagation of sound with the title De Sono; at that time, he was unsuccessfully attempting to obtain a position at the University of Basel. In 1727, he first entered the Paris Academy Prize Problem competition; Pierre Bouguer, who became known as "the father of naval architecture", won, and Euler took second place.
Euler later won this annual prize twelve times. Around this time Johann Bernoulli's two sons, Daniel and Nicolaus, were working at the Imperial Russian Academy of Sciences in Saint Petersburg, and when Nicolaus died, Daniel recommended that the post he had vacated be filled by Euler. In November 1726 Euler eagerly accepted the offer, but delayed making the trip to Saint Petersburg while he applied for a physics professorship at the University of Basel. Euler arrived in Saint Petersburg on 17 May 1727 and was promoted from his junior post in the medical department of the academy to a position in the mathematics department. He lodged with Daniel Bernoulli, with whom he worked in close collaboration. Euler mastered Russian and settled into life in Saint Petersburg. He also took on an additional job as a medic in the Russian Navy. The Academy at Saint Petersburg, established by Peter the Great, was intended to improve education in Russia; as a result, it was made especially attractive to foreign scholars like Euler.
Leonhard Euler
–
Portrait by Jakob Emanuel Handmann (1756)
Leonhard Euler
–
1957 Soviet Union stamp commemorating the 250th birthday of Euler. The text says: 250 years from the birth of the great mathematician, academician Leonhard Euler.
Leonhard Euler
–
Stamp of the former German Democratic Republic honoring Euler on the 200th anniversary of his death. Across the centre it shows his polyhedral formula, nowadays written as "v − e + f = 2".
Leonhard Euler
–
Euler's grave at the Alexander Nevsky Monastery
35.
James Gregory (astronomer and mathematician)
–
James Gregory FRS was a Scottish mathematician and astronomer. His surname is also spelt Gregorie, the original Scottish spelling. In his book Geometriae Pars Universalis, Gregory gave both the first published statement and proof of the fundamental theorem of the calculus, for which he was acknowledged by Isaac Barrow. It was his mother who endowed Gregory with his appetite for geometry, her uncle, Alexander Anderson, having been a pupil of Viète. After his father's death in 1651, his elder brother David took over responsibility for his education. He attended Aberdeen Grammar School, and then Marischal College from 1653 to 1657. In 1663 he went to London, meeting John Collins and fellow Scot Robert Moray, one of the founders of the Royal Society. In 1664 he departed for the University of Padua, in the Venetian Republic, passing through Flanders and Paris. At Padua he lived in the house of his countryman James Caddenhead, the professor of philosophy, and he was taught by Stefano Angeli. He was successively professor at the University of St Andrews and the University of Edinburgh. He had married Mary, daughter of George Jameson, painter, and widow of John Burnet of Elrick, Aberdeen; their son James was Professor of Physics at King's College, Aberdeen. He was the grandfather of John Gregory, uncle of David Gregorie, and brother of David Gregory. About a year after assuming the Chair of Mathematics at Edinburgh, James Gregory suffered a stroke while viewing the moons of Jupiter with his students. He died a few days later at the age of 36. In the Optica Promota, published in 1663, Gregory described his design for a reflecting telescope, the Gregorian telescope. In 1667, Gregory issued his Vera Circuli et Hyperbolae Quadratura, in which he showed how the areas of the circle and hyperbola could be obtained in the form of convergent infinite series. Nevertheless, Gregory was effectively among the first to speculate about the existence of what are now termed transcendental numbers. In addition, it gave the first proof of the fundamental theorem of calculus.
The book also contains series expansions of sin, cos, and arcsin; Gregory was probably unaware that the earliest enunciations of these expansions were made by Madhava in India in the 14th century. The book was reprinted in 1668 with an appendix, Geometriae Pars. In his 1663 Optica Promota, James Gregory described his reflecting telescope, which has come to be known by his name, the Gregorian telescope. Gregory pointed out that a reflecting telescope with a parabolic mirror would correct the spherical aberration as well as the chromatic aberration seen in refracting telescopes. According to his own confession, Gregory had no practical skill. The Gregorian telescope design is rarely used today, as other types of reflecting telescopes are known to be more efficient for standard applications; Gregorian optics are, however, used in radio telescopes such as Arecibo. In 1671, or perhaps earlier, he established the theorem that θ = tan θ − (tan³ θ)/3 + (tan⁵ θ)/5 − ⋯, the result being true only if θ lies between −π/4 and π/4. This formula was later used to calculate digits of π, although more efficient formulas were subsequently discovered.
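Gregory's theorem, θ = tan θ − (tan³ θ)/3 + (tan⁵ θ)/5 − ⋯ for |θ| < π/4, is easy to check numerically. A minimal sketch (the function name `gregory_theta` and the truncation at a fixed number of terms are illustrative choices, not from the text):

```python
import math

def gregory_theta(theta, terms=200):
    """Recover theta from tan(theta) via Gregory's series:
    theta = tan(theta) - tan^3(theta)/3 + tan^5(theta)/5 - ...
    Valid only for -pi/4 < theta < pi/4 (there |tan theta| < 1)."""
    t = math.tan(theta)
    return sum((-1) ** k * t ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

# The partial sums converge back to the original angle.
print(abs(gregory_theta(0.5) - 0.5))   # a very small error
```

Setting θ close to π/4 makes tan θ close to 1 and the convergence very slow, which is why this series alone was an inefficient way to compute digits of π.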
James Gregory (astronomer and mathematician)
–
James Gregory (1638–1675)
James Gregory (astronomer and mathematician)
–
Vera circuli et hyperbolae quadratura, 1667
36.
Colin Maclaurin
–
Colin Maclaurin was a Scottish mathematician who made important contributions to geometry and algebra. The Maclaurin series, a special case of the Taylor series, is named after him. Owing to changes in orthography since that time, his surname is alternatively written MacLaurin. Maclaurin was born in Kilmodan, Argyll. His father, John Maclaurin, Reverend and Minister of Glendaruel, died when Maclaurin was in infancy, and he was then educated under the care of his uncle, the Reverend Daniel Maclaurin, minister of Kilfinan. At eleven, Maclaurin entered the University of Glasgow, and at nineteen he was appointed professor of mathematics at Marischal College, Aberdeen; this record as the world's youngest professor endured until March 2008, when the record was officially given to Alia Sabur. In the vacations of 1719 and 1721, Maclaurin went to London, where he became acquainted with Sir Isaac Newton, Dr Benjamin Hoadly, Samuel Clarke, and Martin Folkes. He was admitted a member of the Royal Society. In 1722, having provided a substitute for his class at Aberdeen, he traveled on the Continent as tutor to George Hume, the son of Alexander Hume, 2nd Earl of Marchmont. During their time in Lorraine, he wrote his essay on the percussion of bodies. Upon the death of his pupil at Montpellier, Maclaurin returned to Aberdeen. In 1725 Maclaurin was appointed deputy to the mathematics professor at Edinburgh, James Gregory. On 3 November of that year Maclaurin succeeded Gregory, and went on to raise the character of that university as a school of science; Newton was so impressed with Maclaurin that he had offered to pay his salary himself. Maclaurin used Taylor series to characterize maxima, minima, and points of inflection for infinitely differentiable functions in his Treatise of Fluxions. Maclaurin attributed the series to Taylor, though the series was known before to Newton and Gregory. Nevertheless, Maclaurin received credit for his use of the series, and the Taylor series expanded around 0 is sometimes known as the Maclaurin series.
Maclaurin also made significant contributions to the theory of the gravitational attraction of ellipsoids, a subject that later attracted Clairaut, Euler, Laplace, Legendre, Poisson, and Gauss. Maclaurin showed that an oblate spheroid was a possible equilibrium in Newton's theory of gravity. The subject continues to be of scientific interest, and Nobel Laureate Subramanyan Chandrasekhar dedicated a chapter of his book Ellipsoidal Figures of Equilibrium to Maclaurin spheroids. Independently from Euler and using the same methods, Maclaurin discovered the Euler–Maclaurin formula. He used it to sum powers of arithmetic progressions and to derive Stirling's formula. Maclaurin contributed to the study of elliptic integrals, reducing many intractable integrals to problems of finding arcs for hyperbolas; his work was continued by d'Alembert and Euler, who gave a more concise approach. Maclaurin also published a rule for solving systems of linear equations; this publication preceded by two years Cramer's publication of a generalization of the rule to n unknowns, now commonly known as Cramer's rule. In 1733, Maclaurin married Anne Stewart, the daughter of Walter Stewart. Maclaurin actively opposed the Jacobite Rebellion of 1745 and superintended the operations necessary for the defence of Edinburgh against the Highland army.
Colin Maclaurin
–
Colin Maclaurin (1698–1746)
Colin Maclaurin
–
Memorial, Greyfriars Kirkyard, Edinburgh
37.
Taylor series
–
In mathematics, a Taylor series is a representation of a function as an infinite sum of terms that are calculated from the values of the function's derivatives at a single point. The concept of a Taylor series was formulated by the Scottish mathematician James Gregory. A function can be approximated by using a finite number of terms of its Taylor series; Taylor's theorem gives quantitative estimates on the error introduced by the use of such an approximation. The polynomial formed by taking some initial terms of the Taylor series is called a Taylor polynomial. The Taylor series of a function is the limit of that function's Taylor polynomials as the degree increases. A function may not be equal to its Taylor series, even if its Taylor series converges at every point. A function that is equal to its Taylor series in an interval is known as an analytic function in that interval. The Taylor series of a real or complex-valued function f that is infinitely differentiable at a real or complex number a is the power series f(a) + f′(a)(x − a) + (f″(a)/2!)(x − a)² + (f‴(a)/3!)(x − a)³ + ⋯, which can be written in the more compact sigma notation as ∑_{n=0}^∞ (f^(n)(a)/n!)(x − a)^n, where n! denotes the factorial of n and f^(n)(a) denotes the nth derivative of f evaluated at the point a. The derivative of order zero of f is defined to be f itself, and (x − a)⁰ and 0! are both defined to be 1. When a = 0, the series is also called a Maclaurin series. The Maclaurin series for any polynomial is the polynomial itself. The Maclaurin series for 1/(1 − x) is the geometric series 1 + x + x² + x³ + ⋯, so the Taylor series for 1/x at a = 1 is 1 − (x − 1) + (x − 1)² − (x − 1)³ + ⋯. The Taylor series for the exponential function eˣ at a = 0 is 1 + x + x²/2 + x³/6 + x⁴/24 + x⁵/120 + ⋯ = ∑_{n=0}^∞ x^n/n!. The above expansion holds because the derivative of eˣ with respect to x is also eˣ, and e⁰ = 1. This leaves the terms (x − 0)^n in the numerator and n! in the denominator for each term in the infinite sum.
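The Maclaurin expansion of eˣ above can be evaluated term by term. A small sketch (the helper name `exp_maclaurin` is an illustrative choice):

```python
import math

def exp_maclaurin(x, n_terms):
    """Partial sum of the Maclaurin series for e^x: sum of x^n / n!."""
    return sum(x ** n / math.factorial(n) for n in range(n_terms))

# Taking more initial terms of the series gives a better approximation,
# exactly as the Taylor-polynomial figure for sin(x) illustrates.
for n in (2, 4, 8, 16):
    print(n, abs(exp_maclaurin(1.0, n) - math.e))
```

The printed errors shrink rapidly with the number of terms, reflecting the n! in the denominators.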
The Greek philosopher Zeno considered the problem of summing an infinite series to achieve a finite result, but rejected it as an impossibility. It was through Archimedes's method of exhaustion that an infinite number of progressive subdivisions could be performed to achieve a finite result. Liu Hui independently employed a similar method a few centuries later. In the 14th century, the earliest examples of the use of Taylor series and closely related methods were given by Madhava of Sangamagrama. The Kerala school of astronomy and mathematics further expanded his works with various series expansions. In the 17th century, James Gregory also worked in this area and published several Maclaurin series. It was not until 1715, however, that a general method for constructing these series for all functions for which they exist was finally provided by Brook Taylor. The Maclaurin series was named after Colin Maclaurin, a professor in Edinburgh. If f is given by a convergent power series in an open disc centered at b in the complex plane, it is said to be analytic in this disc.
Taylor series
–
As the degree of the Taylor polynomial rises, it approaches the correct function. This image shows sin(x) and its Taylor approximations, polynomials of degree 1, 3, 5, 7, 9, 11 and 13.
38.
Complementary angles
–
In planar geometry, an angle is the figure formed by two rays, called the sides of the angle, sharing a common endpoint, called the vertex of the angle. Angles formed by two rays lie in a plane, but this plane does not have to be a Euclidean plane. Angles are also formed by the intersection of two planes in Euclidean and other spaces. Angles formed by the intersection of two curves in a plane are defined as the angle determined by the tangent rays at the point of intersection. Similar statements hold in space; for example, the angle formed by two great circles on a sphere is the dihedral angle between the planes determined by the great circles. Angle is also used to designate the measure of an angle or of a rotation. This measure is the ratio of the length of a circular arc to its radius. In the case of an angle, the arc is centered at the vertex. In the case of a rotation, the arc is centered at the center of the rotation and delimited by any other point and its image by the rotation. The word angle comes from the Latin word angulus, meaning corner; cognate words are the Greek ἀγκύλος, meaning crooked or curved. Both are connected with the Proto-Indo-European root *ank-, meaning to bend or bow. Euclid defines a plane angle as the inclination to each other, in a plane, of two lines which meet each other. According to Proclus, an angle must be either a quality or a quantity, or a relationship. In mathematical expressions, it is common to use Greek letters to serve as variables standing for the size of some angle. Lower case Roman letters are also used, as are upper case Roman letters in the context of polygons. See the figures in this article for examples. In geometric figures, angles may also be identified by the labels attached to the three points that define them; for example, the angle at vertex A enclosed by the rays AB and AC is denoted ∠BAC. Sometimes, where there is no risk of confusion, the angle may be referred to simply by its vertex.
However, in geometrical situations it is obvious from context that the positive angle less than or equal to 180 degrees is meant. Otherwise, a convention may be adopted so that ∠BAC always refers to the angle from B to C. Angles smaller than a right angle (less than 90°) are called acute angles. An angle equal to 1/4 turn (90° or π/2 radians) is called a right angle; two lines that form a right angle are said to be normal, orthogonal, or perpendicular. Angles larger than a right angle and smaller than a straight angle (between 90° and 180°) are called obtuse angles. An angle equal to 1/2 turn (180° or π radians) is called a straight angle. Angles larger than a straight angle but less than 1 turn (between 180° and 360°) are called reflex angles.
Complementary angles
–
An angle enclosed by rays emanating from a vertex.
39.
Shape
–
A shape is the form of an object or its external boundary, outline, or external surface, as opposed to other properties such as color, texture, or material composition. Psychologists have theorized that humans mentally break down images into simple geometric shapes called geons; examples of geons include cones and spheres. Some simple shapes can be put into broad categories. For instance, polygons are classified according to their number of edges as triangles, quadrilaterals, pentagons, etc. Each of these is divided into smaller categories: triangles can be equilateral, isosceles, obtuse, acute, scalene, etc., while quadrilaterals can be rectangles, rhombi, trapezoids, squares, etc. Other common shapes are points, lines, planes, and conic sections such as ellipses and circles. Among the most common 3-dimensional shapes are polyhedra, which are shapes with flat faces; ellipsoids, which are egg-shaped or sphere-shaped objects; cylinders; and cones. If an object falls into one of these categories exactly or even approximately, we can use the category to describe its shape. Thus, we say that the shape of a manhole cover is a disk, because it is approximately the same geometric object as an actual geometric disk. Similarity: two objects are similar if one can be transformed into the other by a scaling, together with a sequence of rotations and translations. Isotopy: two objects are isotopic if one can be transformed into the other by a sequence of deformations that do not tear the object or put holes in it. Sometimes, two similar or congruent objects may be regarded as having a different shape if a reflection is required to transform one into the other. For instance, the letters b and d are a reflection of each other, and hence they are congruent and similar. Sometimes, only the outline or external boundary of the object is considered to determine its shape; for instance, a hollow sphere may be considered to have the same shape as a solid sphere.
Procrustes analysis is used in many sciences to determine whether or not two objects have the same shape, or to measure the difference between two shapes. In advanced mathematics, quasi-isometry can be used as a criterion to state that two shapes are approximately the same. Simple shapes can often be classified into basic geometric objects such as a point, a line, a curve, or a plane. However, most shapes occurring in the physical world are complex. Some, such as plant structures and coastlines, may be so complicated as to defy traditional mathematical description, in which case they may be analyzed by differential geometry, or as fractals. In geometry, two subsets of a Euclidean space have the same shape if one can be transformed to the other by a combination of translations, rotations, and uniform scalings. In other words, the shape of a set of points is all the information that is invariant to translations, rotations, and size changes.
Shape
–
A variety of polygonal shapes.
40.
Ratio
–
In mathematics, a ratio is a relationship between two numbers indicating how many times the first number contains the second. For example, if a bowl of fruit contains eight oranges and six lemons, the ratio of oranges to lemons is 8:6; thus, a ratio can be a fraction as opposed to a whole number. Also, in this example the ratio of lemons to oranges is 6:8. The numbers compared in a ratio can be any quantities of a comparable kind, such as objects, persons, or lengths. A ratio is written "a to b" or a:b. When the two quantities have the same units, as is often the case, their ratio is a dimensionless number. A rate is a quotient of variables having different units, but in many applications the word ratio is often used for this more general notion as well. The numbers A and B are sometimes called terms, with A being the antecedent and B the consequent. The proportion expressing the equality of the ratios A:B and C:D is written A:B = C:D or A:B::C:D. This latter form, when spoken or written in the English language, is expressed as "A is to B as C is to D". A, B, C and D are called the terms of the proportion. A and D are called the extremes, and B and C are called the means. The equality of three or more ratios is called a continued proportion. Ratios are sometimes used with three or more terms: the ratio of the dimensions of a "two by four" that is ten inches long is 2:4:10, and a good concrete mix is sometimes quoted as 1:2:4 for the ratio of cement to sand to gravel. It is impossible to trace the origin of the concept of ratio, because the ideas from which it developed would have been familiar to preliterate cultures. For example, the idea of one village being twice as large as another is so basic that it would have been understood in prehistoric society. However, it is possible to trace the origin of the word ratio to the Ancient Greek λόγος. Early translators rendered this into Latin as ratio; a more modern interpretation of Euclid's meaning is more akin to computation or reckoning.
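Two ratios are equal exactly when cross-multiplication gives the same product, i.e. the product of the extremes equals the product of the means (a standard fact about proportions, not stated explicitly in the text above). A small sketch, with an illustrative helper name:

```python
from fractions import Fraction

def same_ratio(a, b, c, d):
    """A:B = C:D exactly when the product of the extremes (A and D)
    equals the product of the means (B and C)."""
    return a * d == b * c

# The oranges-to-lemons ratio 8:6 reduces to 4:3.
assert same_ratio(8, 6, 4, 3)
assert Fraction(8, 6) == Fraction(4, 3)

# Scaling every quantity in the 1:2:4 concrete mix preserves the ratios.
assert same_ratio(1, 2, 3, 6)
```

Using exact integers (or `Fraction`) avoids the rounding issues a floating-point quotient comparison would have.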
Medieval writers used the word proportio to indicate ratio and proportionalitas for the equality of ratios. Euclid collected the results appearing in the Elements from earlier sources. The Pythagoreans developed a theory of ratio and proportion as applied to numbers. The discovery of a theory of ratios that does not assume commensurability is probably due to Eudoxus of Cnidus. The exposition of the theory of proportions that appears in Book VII of The Elements reflects the earlier theory of ratios of commensurables. The existence of multiple theories seems unnecessarily complex to modern sensibility, since ratios are, to a large extent, identified with quotients. This is a comparatively recent development, however, as can be seen from the fact that modern geometry textbooks still use distinct terminology and notation for ratios.
Ratio
–
The ratio of width to height of standard-definition television.
41.
Cosine
–
In mathematics, the trigonometric functions are functions of an angle. They relate the angles of a triangle to the lengths of its sides. Trigonometric functions are important in the study of triangles and in modeling periodic phenomena, among many other applications. The most familiar trigonometric functions are the sine, cosine, and tangent; more precise definitions are detailed below. Trigonometric functions have a wide range of uses, including computing unknown lengths and angles in triangles. In this use, trigonometric functions are applied, for instance, in navigation, engineering, and physics; a common use in elementary physics is resolving a vector into Cartesian coordinates. In modern usage, there are six basic trigonometric functions, tabulated here with equations that relate them to one another. For any similar triangle the ratio of the hypotenuse and another of the sides remains the same: if the hypotenuse is twice as long, so are the sides. It is these ratios that the trigonometric functions express. To define the functions for the angle A, start with any right triangle that contains the angle A. The three sides of the triangle are named as follows. The hypotenuse is the side opposite the right angle; the hypotenuse is always the longest side of a right-angled triangle. The opposite side is the side opposite to the angle we are interested in, in this case side a. The adjacent side is the side adjoining both the angle of interest and the right angle, in this case side b. In ordinary Euclidean geometry, according to the triangle postulate, the inside angles of every triangle total 180°. Therefore, in a right triangle, the two non-right angles total 90°, so each of these angles must be in the range (0°, 90°), as expressed in interval notation. The following definitions apply to angles in this 0°–90° range; they can be extended to the full set of real arguments by using the unit circle, or by requiring certain symmetries and that they be periodic functions.
For example, the figure shows sin θ for angles θ, π − θ, π + θ, and 2π − θ depicted on the unit circle and as a graph. The value of the sine repeats itself, apart from sign, in all four quadrants, and this behaviour continues if the range of θ is extended to additional rotations. The trigonometric functions are summarized in the following table and described in more detail below. The angle θ is the angle between the hypotenuse and the adjacent line, that is, the angle at A in the accompanying diagram. The sine of an angle is the ratio of the length of the opposite side to the length of the hypotenuse; in our case sin A = opposite/hypotenuse = a/h. This ratio does not depend on the size of the particular right triangle chosen, as long as it contains the angle A, since all such triangles are similar. The cosine of an angle is the ratio of the length of the adjacent side to the length of the hypotenuse; in our case cos A = adjacent/hypotenuse = b/h.
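The ratio definitions above can be checked numerically with the 3–4–5 right triangle (the variable names below are illustrative):

```python
import math

# In the 3-4-5 right triangle, the angle A whose opposite side has
# length 3 and adjacent side length 4 satisfies sin A = 3/5, cos A = 4/5.
a, b, h = 3.0, 4.0, 5.0
A = math.atan2(a, b)   # angle with opposite side a and adjacent side b

assert math.isclose(math.sin(A), a / h)   # 0.6
assert math.isclose(math.cos(A), b / h)   # 0.8

# Scaling the triangle leaves the ratios, and hence the angle, unchanged:
# all such triangles are similar.
assert math.isclose(math.atan2(2 * a, 2 * b), A)
```

The final assertion is exactly the similarity argument in the text: the ratios do not depend on the size of the triangle.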
Cosine
–
Trigonometric functions in the complex plane
Cosine
–
Trigonometry
Cosine
42.
Adjacent side (right triangle)
–
A triangle is a polygon with three edges and three vertices. It is one of the basic shapes in geometry. A triangle with vertices A, B, and C is denoted △ABC. In Euclidean geometry, any three points, when non-collinear, determine a unique triangle and a unique plane. This article is about triangles in Euclidean geometry except where otherwise noted. Triangles can be classified according to the lengths of their sides. An equilateral triangle has all sides the same length; an equilateral triangle is also a regular polygon with all angles measuring 60°. An isosceles triangle has two sides of equal length. Some mathematicians define an isosceles triangle to have exactly two equal sides, whereas others define an isosceles triangle as one with at least two equal sides; the latter definition would make all equilateral triangles isosceles triangles. The 45–45–90 right triangle, which appears in the tetrakis square tiling, is isosceles. A scalene triangle has all its sides of different lengths; equivalently, it has all angles of different measure. Hatch marks, also called tick marks, are used in diagrams of triangles to identify sides of equal length. A side can be marked with a pattern of ticks, short line segments in the form of tally marks; two sides have equal lengths if they are both marked with the same pattern. In a triangle, the pattern is no more than 3 ticks. Similarly, patterns of 1, 2, or 3 concentric arcs inside the angles are used to indicate equal angles. Triangles can also be classified according to their internal angles, measured here in degrees. A right triangle has one of its interior angles measuring 90°. The side opposite to the right angle is the hypotenuse, the longest side of the triangle; the other two sides are called the legs or catheti of the triangle. Special right triangles are right triangles with additional properties that make calculations involving them easier. One of the two most famous is the 3–4–5 right triangle, where 3² + 4² = 5²; in this situation, 3, 4, and 5 are a Pythagorean triple.
The other one is an isosceles right triangle, which has two angles that each measure 45 degrees. Triangles that do not have an angle measuring 90° are called oblique triangles. A triangle with all interior angles measuring less than 90° is an acute triangle or acute-angled triangle; if c is the length of the longest side, then a² + b² > c². A triangle with one interior angle measuring more than 90° is an obtuse triangle or obtuse-angled triangle; if c is the length of the longest side, then a² + b² < c². A triangle with an interior angle of 180° is degenerate.
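The comparison of a² + b² with c² above gives a complete classification by the largest angle. A minimal sketch (the function name and its return strings are illustrative choices):

```python
def classify_by_angles(a, b, c):
    """Classify a triangle by its largest angle, comparing a^2 + b^2
    with c^2, where c is the longest side."""
    a, b, c = sorted((a, b, c))        # ensure c is the longest side
    if a <= 0 or a + b <= c:
        return "degenerate or not a triangle"
    lhs, rhs = a * a + b * b, c * c
    if lhs > rhs:
        return "acute"
    if lhs < rhs:
        return "obtuse"
    return "right"

print(classify_by_angles(3, 4, 5))   # right: the Pythagorean triple
print(classify_by_angles(2, 3, 4))   # obtuse: 4 + 9 < 16
```

Sorting the sides first means the caller need not know which side is the hypotenuse candidate.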
Adjacent side (right triangle)
–
The Flatiron Building in New York is shaped like a triangular prism
Adjacent side (right triangle)
–
A triangle
43.
Tangent (trigonometric function)
–
In mathematics, the trigonometric functions are functions of an angle. They relate the angles of a triangle to the lengths of its sides. Trigonometric functions are important in the study of triangles and in modeling periodic phenomena, among many other applications. The most familiar trigonometric functions are the sine, cosine, and tangent; more precise definitions are detailed below. Trigonometric functions have a wide range of uses, including computing unknown lengths and angles in triangles. In this use, trigonometric functions are applied, for instance, in navigation, engineering, and physics; a common use in elementary physics is resolving a vector into Cartesian coordinates. In modern usage, there are six basic trigonometric functions, tabulated here with equations that relate them to one another. For any similar triangle the ratio of the hypotenuse and another of the sides remains the same: if the hypotenuse is twice as long, so are the sides. It is these ratios that the trigonometric functions express. To define the functions for the angle A, start with any right triangle that contains the angle A. The three sides of the triangle are named as follows. The hypotenuse is the side opposite the right angle; the hypotenuse is always the longest side of a right-angled triangle. The opposite side is the side opposite to the angle we are interested in, in this case side a. The adjacent side is the side adjoining both the angle of interest and the right angle, in this case side b. In ordinary Euclidean geometry, according to the triangle postulate, the inside angles of every triangle total 180°. Therefore, in a right triangle, the two non-right angles total 90°, so each of these angles must be in the range (0°, 90°), as expressed in interval notation. The following definitions apply to angles in this 0°–90° range; they can be extended to the full set of real arguments by using the unit circle, or by requiring certain symmetries and that they be periodic functions.
For example, the figure shows sin θ for angles θ, π − θ, π + θ, and 2π − θ depicted on the unit circle and as a graph. The value of the sine repeats itself, apart from sign, in all four quadrants, and this behaviour continues if the range of θ is extended to additional rotations. The trigonometric functions are summarized in the following table and described in more detail below. The angle θ is the angle between the hypotenuse and the adjacent line, that is, the angle at A in the accompanying diagram. The sine of an angle is the ratio of the length of the opposite side to the length of the hypotenuse; in our case sin A = opposite/hypotenuse = a/h. This ratio does not depend on the size of the particular right triangle chosen, as long as it contains the angle A, since all such triangles are similar. The cosine of an angle is the ratio of the length of the adjacent side to the length of the hypotenuse; in our case cos A = adjacent/hypotenuse = b/h.
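From the same right-triangle ratios, the tangent of A is opposite over adjacent, which equals sin A / cos A (a standard identity). A small numerical check, with illustrative variable names:

```python
import math

# Legs of a right triangle: a is opposite the angle A, b is adjacent.
a, b = 3.0, 4.0
h = math.hypot(a, b)       # hypotenuse; 5.0 for the 3-4-5 triangle
A = math.atan2(a, b)

# tan A = opposite/adjacent, and also sin A / cos A.
assert math.isclose(math.tan(A), a / b)
assert math.isclose(math.tan(A), math.sin(A) / math.cos(A))
```

Unlike sine and cosine, the tangent is a ratio of the two legs, so the hypotenuse h drops out entirely.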
Tangent (trigonometric function)
–
Trigonometric functions in the complex plane
Tangent (trigonometric function)
–
Trigonometry
Tangent (trigonometric function)
44.
Infinite series
–
In mathematics, a series is, informally speaking, the sum of the terms of an infinite sequence. Unlike the sum of a finite sequence, which has defined first and last terms, an infinite sum continues indefinitely; to emphasize that there are infinitely many terms, a series is often called an infinite series. In order to make the notion of an infinite sum mathematically rigorous, a limit of partial sums is used. Given an infinite sequence (a₁, a₂, a₃, …), the associated series is the expression obtained by adding all those terms together: a₁ + a₂ + a₃ + ⋯. This can be written compactly as ∑_{i=1}^∞ a_i, using the summation symbol ∑. The sequence can be composed of any kind of object for which addition is defined. A series is evaluated by examining the finite sums of the first n terms of the sequence, called the nth partial sum of the sequence, and taking the limit as n approaches infinity. If this limit does not exist, the infinite sum cannot be assigned a value, and in this case the series is said to be divergent. On the other hand, if the partial sums tend to a limit when the number of terms increases indefinitely, then the series is said to be convergent, and the limit is called the sum of the series. An example is the series from Zeno's dichotomy and its mathematical representation: ∑_{n=1}^∞ 1/2^n = 1/2 + 1/4 + 1/8 + ⋯. The study of series is a major part of mathematical analysis. Series are used in most areas of mathematics, even for studying finite structures. In addition to their ubiquity in mathematics, infinite series are also widely used in other quantitative disciplines such as physics, computer science, statistics, and finance. For any sequence of rational numbers, real numbers, complex numbers, functions thereof, etc., the associated series is defined as above. By definition, the series ∑_{n=0}^∞ a_n converges to a limit L if the sequence of its partial sums converges to L. This definition is usually written as L = ∑_{n=0}^∞ a_n ⇔ L = lim_{k→∞} s_k. When the index set is the natural numbers, I = ℕ, a series indexed on the natural numbers is an ordered formal sum, and so we rewrite ∑_{n∈ℕ} as ∑_{n=0}^∞ in order to emphasize the ordering induced by the natural numbers.
Thus, we obtain the common notation for a series indexed by the natural numbers: ∑_{n=0}^∞ a_n = a₀ + a₁ + a₂ + ⋯. When the semigroup G is also a topological space, the series ∑_{n=0}^∞ a_n converges to an element L ∈ G if the sequence of partial sums converges to L. This definition is usually written as L = ∑_{n=0}^∞ a_n ⇔ L = lim_{k→∞} s_k. A series ∑a_n is said to converge or to be convergent when the sequence (s_N) of partial sums has a finite limit.
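The partial-sum definition above can be made concrete with Zeno's dichotomy series 1/2 + 1/4 + 1/8 + ⋯, whose partial sums tend to 1. A minimal sketch (the generator name `partial_sums` is an illustrative choice):

```python
def partial_sums(terms):
    """Yield the running partial sums s_1, s_2, ... of a sequence of terms."""
    total = 0.0
    for t in terms:
        total += t
        yield total

# Zeno's dichotomy: the terms 1/2, 1/4, 1/8, ... sum to 1.
zeno = [0.5 ** n for n in range(1, 51)]
sums = list(partial_sums(zeno))
print(sums[:4])             # [0.5, 0.75, 0.875, 0.9375]
print(abs(sums[-1] - 1.0))  # the remaining gap is 2^-50, effectively zero
```

The printed partial sums are exactly the sequence (s_k) whose limit is the sum of the series.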
Infinite series
–
Illustration of 3 geometric series with partial sums from 1 to 6 terms. The dashed line represents the limit.
45.
Mnemonic
–
A mnemonic device, or memory device, is any learning technique that aids information retention in the human memory. Mnemonics make use of encoding, retrieval cues, and imagery as specific tools to encode any given information in a way that allows for efficient storage. Mnemonics aid original information in becoming associated with something more meaningful, which, in turn, makes it easier to retrieve. The word mnemonic is derived from the Ancient Greek word μνημονικός, meaning "of memory" or "relating to memory", and is related to Mnemosyne, the name of the goddess of memory in Greek mythology. Both of these words are derived from μνήμη, "remembrance, memory". Mnemonics in antiquity were most often considered in the context of what is today known as the art of memory. Ancient Greeks and Romans distinguished between two types of memory: the natural memory and the artificial memory. The former is inborn and is the one that everyone uses instinctively; the latter, in contrast, has to be trained and developed through the learning and practice of a variety of mnemonic techniques. Mnemonic systems are techniques or strategies consciously used to improve memory; they help use information already stored in long-term memory to make memorisation an easier task. Mnemonic devices were much cultivated by Greek sophists and philosophers and are referred to by Plato. In later times the poet Simonides was credited for the development of these techniques. The Romans valued such helps in order to support facility in public speaking. The Greek and Roman system of mnemonics was founded on the use of mental places and signs or pictures; to recall these, an individual had only to search over the apartments of the house until discovering the places where images had been placed by the imagination. The rules of mnemonics are also referred to by Martianus Capella, and among the voluminous writings of Roger Bacon is a tractate De arte memorativa.
Ramon Llull devoted special attention to mnemonics in connection with his ars generalis. About the end of the 15th century, Petrus de Ravenna provoked such astonishment in Italy by his mnemonic feats that he was believed by many to be a necromancer; his Phoenix artis memoriae went through as many as nine editions. About the end of the 16th century, Lambert Schenkel, who taught mnemonics in France, Italy, and Germany, similarly surprised people with his memory. He was denounced as a sorcerer by the University of Louvain. The most complete account of his system is given in two works by his pupil Martin Sommer, published in Venice in 1619. In 1618 John Willis published Mnemonica, sive ars reminiscendi, containing a statement of the principles of topical or local mnemonics. Giordano Bruno included a memoria technica in his treatise De umbris idearum. Other writers of this period are the Florentine Publicius, Johannes Romberch, and Hieronimo Morafiot, author of an Ars memoriae. The philosopher Gottfried Wilhelm Leibniz adopted an approach very similar to that of Wennsshein for his scheme of a form of writing common to all languages. Wennsshein's method was adopted with slight changes afterward by the majority of subsequent original systems; it was modified and supplemented by Richard Grey, a priest who published a Memoria technica in 1730.
Mnemonic
–
Detail of Giordano Bruno's statue in Rome. Bruno was famous for his mnemonics, some of which he included in his treatises De umbris idearum and Ars Memoriae.
Mnemonic
–
Knuckle mnemonic for the number of days in each month of the Gregorian Calendar. Each knuckle represents a 31-day month.
46.
Interpolate
–
In the mathematical field of numerical analysis, interpolation is a method of constructing new data points within the range of a discrete set of known data points. In engineering and science, one often has a number of data points, obtained by sampling or experimentation, which represent the values of a function for a limited number of values of the independent variable; it is often required to interpolate, i.e. estimate, the value of that function for an intermediate value of the independent variable. A different problem which is closely related to interpolation is the approximation of a complicated function by a simple function: suppose the formula for some given function is known, but too complex to evaluate efficiently; a few known data points from the original function can then be used to create an interpolation based on a simpler function. In a different setting, where the objects being interpolated are operators between function spaces rather than data points, the classical results about interpolation of operators are the Riesz–Thorin theorem and the Marcinkiewicz theorem; there are also many other subsequent results. For example, suppose we have a table which gives some values of an unknown function f. Interpolation provides a means of estimating the function at intermediate points. There are many different interpolation methods, some of which are described below. Some of the concerns to take into account when choosing an appropriate algorithm are: how accurate is the method, how expensive is it, how smooth is the interpolant, and how many data points are needed. The simplest interpolation method is to locate the nearest data value. One of the simplest smooth methods is linear interpolation. Consider the above example of estimating f(2.5): since 2.5 is midway between 2 and 3, it is reasonable to take f(2.5) midway between f(2) = 0.9093 and f(3) = 0.1411, which yields 0.5252. A disadvantage is that the interpolant is not differentiable at the data points xk. The following error estimate shows that linear interpolation is not very precise. Denote the function which we want to interpolate by g, and suppose that x lies between xa and xb; then the linear interpolation error is |f(x) − g(x)| ≤ C (xb − xa)², where C = (1/8) max over r in [xa, xb] of |g″(r)|.
In words, the error is proportional to the square of the distance between the data points. The error in some other methods, including polynomial interpolation and spline interpolation, is proportional to higher powers of the distance between the data points; these methods also produce smoother interpolants. Polynomial interpolation is a generalization of linear interpolation: note that the linear interpolant is a polynomial of degree one. We now replace this interpolant with a polynomial of higher degree. Consider again the problem given above. The following sixth-degree polynomial goes through all seven points; substituting x = 2.5, we find that f(2.5) = 0.5965. Generally, if we have n points, there is exactly one polynomial of degree at most n−1 going through all the data points.
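The linear-interpolation step above can be reproduced in a few lines. This is a minimal sketch; the function name `lerp` and the two sample points (values of sin x from the table the article refers to) are taken from the worked example in the text.

```python
# Linear interpolation reproducing the article's example:
# estimate f(2.5) from the known points f(2) = 0.9093 and f(3) = 0.1411.

def lerp(x, x0, y0, x1, y1):
    """Linearly interpolate f(x) between (x0, y0) and (x1, y1)."""
    t = (x - x0) / (x1 - x0)
    return (1 - t) * y0 + t * y1

estimate = lerp(2.5, 2, 0.9093, 3, 0.1411)
print(round(estimate, 4))  # 0.5252, midway between the two values
```

Because 2.5 is exactly midway between the two sample points, the weight t is 0.5 and the result is simply the average of the two function values, matching the 0.5252 quoted in the text.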
Interpolate
–
An interpolation of a finite set of points on an epitrochoid. The points through which the curve is splined are red; the blue curve connecting them is the interpolation.
47.
Scientific calculator
–
A scientific calculator is a type of electronic calculator, usually but not always handheld, designed to solve problems in science, engineering, and mathematics. Scientific calculators have almost completely replaced slide rules in traditional applications, and there is also some overlap with the financial calculator market. A few have multi-line displays, with recent models from Hewlett-Packard, Texas Instruments, Casio, and Sharp providing a method to enter an entire problem as it is written on the page, using simple formatting tools. The HP-35, introduced on February 1, 1972, was Hewlett-Packard's first pocket calculator and the world's first handheld scientific calculator. Like some of HP's desktop calculators it used RPN (Reverse Polish Notation). Introduced at US$395, the HP-35 was available from 1972 to 1975. Texas Instruments, after the introduction of units with scientific notation, came out with a handheld scientific calculator on January 15, 1974. TI continues to be a major player in the calculator market. Casio and Sharp have also been major players, with Casio's fx series being a common brand. Casio is also a major player in the graphing calculator market.
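The RPN entry style mentioned above can be sketched with a simple stack evaluator. This is illustrative only: a real HP-35 has a fixed four-register stack with different overflow behaviour, and the function `eval_rpn` is a hypothetical name, not anything from the source.

```python
# Minimal sketch of Reverse Polish Notation (RPN) evaluation, the entry
# style used by the HP-35: operands are pushed onto a stack and each
# operator consumes the top two entries.

def eval_rpn(tokens):
    stack = []
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # second operand entered
            a = stack.pop()   # first operand entered
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# (3 + 4) * 2 is entered in RPN as: 3 4 + 2 *
print(eval_rpn("3 4 + 2 *".split()))  # 14.0
```

Note that RPN needs no parentheses: the order in which operands and operators are entered fully determines the grouping.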
Scientific calculator
–
Casio FX-77, a solar-powered scientific calculator from the 1980s using a single-line display
Scientific calculator
–
The TI-84 Plus: A typical graphing calculator by Texas Instruments
Scientific calculator
–
A modern scientific calculator with a dot matrix LCD display
48.
Programming language
–
A programming language is a formal computer language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs to control the behavior of a machine or to express algorithms. From the early 1800s, programs were used to direct the behavior of machines such as Jacquard looms. Thousands of different programming languages have been created, mainly in the computer field, and many more are being created every year. Many programming languages require computation to be specified in an imperative form, while other languages use other forms of program specification such as the declarative form. The description of a programming language is usually split into the two components of syntax (form) and semantics (meaning). Some languages are defined by a specification document, while other languages have a dominant implementation that is treated as a reference. Some languages have both, with the basic language defined by a standard and extensions taken from the dominant implementation being common. A programming language is a notation for writing programs, which are specifications of a computation or algorithm. Some, but not all, authors restrict the term "programming language" to those languages that can express all possible algorithms. For example, PostScript programs are frequently created by another program to control a computer printer or display. More generally, a programming language may describe computation on some, possibly abstract, machine. It is generally accepted that a complete specification for a programming language includes a description, possibly idealized, of a machine or processor for that language. In most practical contexts, a programming language involves a computer; consequently, programming languages are usually defined and studied this way. Abstractions: programming languages usually contain abstractions for defining and manipulating data structures or controlling the flow of execution. Expressive power: the theory of computation classifies languages by the computations they are capable of expressing; all Turing-complete languages can implement the same set of algorithms.
ANSI/ISO SQL-92 and Charity are examples of languages that are not Turing complete. Markup languages like XML, HTML, or troff, which define structured data, are not usually considered programming languages. Programming languages may, however, share syntax with markup languages if a computational semantics is defined: XSLT, for example, is a Turing-complete XML dialect. Moreover, LaTeX, which is mostly used for structuring documents, also contains a Turing-complete subset. The term computer language is sometimes used interchangeably with programming language.
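The imperative/declarative distinction mentioned above can be shown in miniature. Both snippets below compute the same result, but the first spells out how, step by step with explicit control flow, while the second states what is wanted and leaves the iteration to the language. The example itself is hypothetical, not drawn from the source.

```python
# Imperative vs. declarative specification of the same computation:
# the squares of the even numbers in a list.

data = [1, 2, 3, 4, 5, 6]

# Imperative form: explicit loop, branch, and mutable state.
squares = []
for n in data:
    if n % 2 == 0:
        squares.append(n * n)

# Declarative style: a comprehension describing the result directly.
squares_decl = [n * n for n in data if n % 2 == 0]

print(squares, squares_decl)  # [4, 16, 36] [4, 16, 36]
```

Fully declarative languages such as SQL push this further: a query names the desired result set and the system chooses the evaluation strategy entirely on its own.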
Programming language
–
The Manchester Mark 1 ran programs written in Autocode from 1952.
Programming language
–
A selection of textbooks that teach programming, in languages both popular and obscure. These are only a few of the thousands of programming languages and dialects that have been designed in history.
49.
Floating point unit
–
A floating-point unit (FPU) is a part of a computer system specially designed to carry out operations on floating-point numbers. Typical operations are addition, subtraction, multiplication, division, and square root. Some systems can also perform various transcendental functions such as exponential or trigonometric calculations, though in most modern processors these are done with software library routines. An FPU could be an integrated circuit, an entire circuit board, or a cabinet. Where floating-point calculation hardware has not been provided, floating-point calculations are done in software; emulation can be implemented on any of several levels: in the CPU as microcode, as an operating system function, or in user-space code. When only integer functionality is available, the CORDIC floating-point emulation methods are most commonly used. In most modern computer architectures, there is some division of floating-point operations from integer operations, though this division varies significantly by architecture. Some, like the Intel x86, have dedicated floating-point registers, and in earlier superscalar architectures without general out-of-order execution, floating-point operations were sometimes pipelined separately from integer operations. Since the early 1990s, many microprocessors for desktops and servers have had more than one FPU. The modular Bulldozer microarchitecture uses a special FPU named FlexFPU, which uses simultaneous multithreading. Each physical integer core, two per module, is single-threaded, in contrast with Intel's Hyper-Threading, where two virtual simultaneous threads share the resources of a single physical core. Some floating-point hardware only supports the simplest operations: addition, subtraction, and multiplication. But even the most complex floating-point hardware has a finite number of operations it can support; for example, none directly support arbitrary-precision arithmetic.
When a CPU is executing a program that calls for a floating-point operation that is not directly supported by the hardware, the CPU uses a series of simpler floating-point operations. In systems without any floating-point hardware, the CPU emulates the operation using a series of simpler fixed-point arithmetic operations that run on the integer arithmetic logic unit. The software that lists the series of operations to emulate floating-point operations is often packaged in a floating-point library. In some cases, FPUs may be specialized and divided between simpler floating-point operations and more complicated operations, like division. In some cases, only the simple operations may be implemented in hardware or microcode, while the more complex operations are implemented as software. In the 1980s, it was common in IBM PC/compatible microcomputers for the FPU to be separate from the CPU; it would only be purchased if needed to speed up or enable math-intensive programs. The IBM PC, XT, and most compatibles based on the 8088 or 8086 had a socket for the optional 8087 coprocessor. Other companies manufactured co-processors for the Intel x86 series. Coprocessors were also available for the Motorola 68000 family: the 68881 and 68882. These were common in Motorola 68020/68030-based workstations like the Sun-3 series. There are also add-on FPU coprocessor units for microcontroller units/single-board computers, which serve to provide floating-point arithmetic capability. These add-on FPUs are host-processor-independent, possess their own programming requirements, and are provided with their own integrated development environments.
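The fixed-point idea behind software emulation can be sketched briefly. Real soft-float libraries implement the full IEEE 754 formats, rounding modes, and special values; this toy Q16.16 format (16 fractional bits, names like `fixed_mul` invented here) only illustrates how real-valued arithmetic can be reduced to integer operations of the kind an ALU provides.

```python
# Sketch of fixed-point arithmetic, the kind of integer-only scheme a
# software floating-point library builds on: real numbers are held as
# scaled integers (Q16.16: 16 fractional bits), so multiplication needs
# only an integer multiply followed by a shift.

FRAC_BITS = 16
ONE = 1 << FRAC_BITS          # the value 1.0 in Q16.16

def to_fixed(x):
    return int(round(x * ONE))

def from_fixed(f):
    return f / ONE

def fixed_mul(a, b):
    return (a * b) >> FRAC_BITS   # integer multiply, then rescale

a = to_fixed(1.5)
b = to_fixed(2.25)
print(from_fixed(fixed_mul(a, b)))  # 3.375
```

Addition and subtraction need no rescaling at all in this representation, which is why fixed-point emulation of simple operations is cheap while division and transcendental functions are far more involved.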
Floating point unit
–
An Intel 80287
50.
Light
–
Light is electromagnetic radiation within a certain portion of the electromagnetic spectrum. The word usually refers to visible light, which is visible to the human eye and is responsible for the sense of sight. Visible light is usually defined as having wavelengths in the range of 400–700 nanometres, or 4.00 × 10−7 to 7.00 × 10−7 m. This wavelength range corresponds to frequencies of roughly 430–750 terahertz. The main source of light on Earth is the Sun. Sunlight provides the energy that green plants use to create sugars, mostly in the form of starches, which release energy into the living things that digest them. This process of photosynthesis provides virtually all the energy used by living things. Historically, another important source of light for humans has been fire; with the development of electric lights and power systems, electric lighting has effectively replaced firelight. Some species of animals generate their own light, a process called bioluminescence; for example, fireflies use light to locate mates, and vampire squids use it to hide themselves from prey. Visible light, like all types of electromagnetic radiation, is experimentally found to always move at the speed of light, about 299,792,458 metres per second, in a vacuum. In physics, the term light sometimes refers to electromagnetic radiation of any wavelength; in this sense, gamma rays, X-rays, microwaves and radio waves are also light. Like all types of light, visible light is emitted and absorbed in tiny packets called photons and exhibits properties of both waves and particles. This property is referred to as the wave–particle duality. The study of light, known as optics, is an important research area in modern physics. Generally, EM radiation, or EMR, is classified by wavelength into radio, microwave, infrared, visible light, ultraviolet, X-rays and gamma rays. The behavior of EMR depends on its wavelength: higher frequencies have shorter wavelengths, and lower frequencies have longer wavelengths. When EMR interacts with single atoms and molecules, its behavior depends on the amount of energy per quantum it carries.
There exist animals that are sensitive to various types of infrared. Infrared sensing in snakes depends on a kind of natural thermal imaging, in which tiny packets of cellular water are raised in temperature by the infrared radiation; EMR in this range causes molecular vibration and heating effects, which is how these animals detect it. Above the range of visible light, ultraviolet light becomes invisible to humans, mostly because it is absorbed by the cornea below 360 nanometers and the internal lens below 400. Furthermore, the rods and cones located in the retina of the human eye cannot detect the very short ultraviolet wavelengths and are in fact damaged by ultraviolet. Many animals with eyes that do not require lenses are able to detect ultraviolet by quantum photon-absorption mechanisms. Various sources define visible light as narrowly as 420 to 680 nm or as broadly as 380 to 800 nm.
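The wavelength and frequency figures quoted above are linked by f = c/λ, and the conversion can be checked directly. A minimal sketch:

```python
# Converting the visible-light wavelength range 400-700 nm to frequency
# with f = c / wavelength, to check the ~430-750 THz range quoted above.

c = 299_792_458  # speed of light in vacuum, m/s

for nm in (700, 400):
    f_thz = c / (nm * 1e-9) / 1e12
    print(f"{nm} nm -> {f_thz:.0f} THz")
# 700 nm -> 428 THz
# 400 nm -> 749 THz
```

The longer 700 nm wavelength maps to the lower end of the frequency range and the shorter 400 nm wavelength to the upper end, as the inverse relation requires.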
Light
–
An example of refraction of light. The straw appears bent, because of refraction of light as it enters liquid from air.
Light
–
A triangular prism dispersing a beam of white light. The longer wavelengths (red) and the shorter wavelengths (blue) get separated.
Light
–
A cloud illuminated by sunlight
Light
–
A city illuminated by artificial lighting
51.
Music theory
–
Music theory is the study of the practices and possibilities of music. The term is used in three ways in music, though all three are interrelated. The first is what is otherwise called "rudiments", currently taught as the elements of notation, of key signatures, of time signatures, of rhythmic notation, and so on. Theory in this sense is treated as the necessary preliminary to the study of harmony, counterpoint, and form. The second is the study of writings about music from ancient times onwards; this older, speculative discipline became the basis for tuning systems in later centuries. Music theory is frequently concerned with describing how musicians and composers make music, including tuning systems and composition methods among other topics. Music theory as a practical discipline encompasses the methods and concepts composers and other musicians use in creating music. The development, preservation, and transmission of music theory in this sense may be found in oral and written music-making traditions, musical instruments, and other artifacts. In ancient and living cultures around the world, the deep and long roots of music theory are clearly visible in instruments, oral traditions, and current music-making. Many cultures, at least as far back as ancient Mesopotamia and ancient China, have also considered music theory in more formal ways such as written treatises. In modern academia, music theory is a subfield of musicology, the wider study of musical cultures and history. Etymologically, music theory is an act of contemplation of music, from the Greek θεωρία, "a looking at, viewing, contemplation, speculation, theory; a sight". A person who researches, teaches, or writes articles about music theory is a music theorist. University study, typically to the M.A. or Ph.D. level, is required to teach as a music theorist in a US or Canadian university.
Methods of analysis include mathematics, graphic analysis, and especially analysis enabled by Western music notation; comparative, descriptive, statistical, and other methods are also used. See for instance Paleolithic flutes, Gǔdí, and the Anasazi flute. Several surviving Sumerian and Akkadian clay tablets include musical information of a theoretical nature, mainly lists of intervals and tunings. The scholar Sam Mirelman reports that the earliest of these dates from before 1500 BCE. Further, all the Mesopotamian texts are united by the use of a common terminology for music. Much of Chinese music history and theory remains unclear. The earliest texts about Chinese music theory are inscribed on stone and include more than 2,800 words describing theories and practices of music pitches of the time. The bells produce two intertwined pentatonic scales three tones apart, with additional pitches completing the chromatic scale. Chinese theory starts from numbers, the main musical numbers being twelve, five and eight. Twelve refers to the number of pitches on which the scales can be constructed. The Lüshi chunqiu from about 239 BCE recalls the legend of Ling Lun: on the order of the Yellow Emperor, Ling Lun collected twelve bamboo lengths with thick and even nodes. Blowing on one of these like a pipe, he found its sound agreeable and named it huangzhong, the "Yellow Bell".
Music theory
–
Ancient Egyptian musicians playing lutes in an ensemble.
Music theory
–
Pythagoras and Philolaus engaged in theoretical investigations, in a woodcut from Franchinus Gaffurius, Theorica musicæ (1492)
Music theory
–
A set of bells from China, 5th Century BCE.
Music theory
–
Barbershop quartets, such as this US Navy group, sing 4-part pieces, made up of a melody line (normally the second-highest voice, called the "lead") and 3 harmony parts.
52.
Acoustics
–
Acoustics is the interdisciplinary science that deals with the study of all mechanical waves in gases, liquids, and solids, including topics such as vibration, sound, ultrasound and infrasound. A scientist who works in the field of acoustics is an acoustician, while someone working in the field of acoustics technology may be called an acoustical engineer. The application of acoustics is present in almost all aspects of modern society, with the most obvious being the audio and noise control industries. Hearing is one of the most crucial means of survival in the animal world; accordingly, the science of acoustics spreads across many facets of human society: music, medicine, architecture, industrial production, warfare and more. Likewise, animal species such as songbirds and frogs use sound and hearing as a key element of mating rituals or for marking territories. Art, craft, science and technology have provoked one another to advance the whole, as in many other fields of knowledge. Robert Bruce Lindsay's "Wheel of Acoustics" is a well-accepted overview of the fields in acoustics. The word "acoustic" is derived from the Greek word ἀκουστικός, meaning "of or for hearing, ready to hear", and that from ἀκουστός, "heard, audible", which in turn derives from the verb ἀκούω, "I hear". The Latin synonym is "sonic", after which the term sonics used to be a synonym for acoustics. Frequencies above and below the audible range are called ultrasonic and infrasonic, respectively. Pythagoras observed, for example, that a string of a given length would sound particularly harmonious with a string of twice the length. In modern parlance, if a string sounds the note C when plucked, a string twice as long will sound a C an octave lower. In one system of tuning, the tones in between are then given by string-length ratios of 16:9 for D, 8:5 for E, 3:2 for F, 4:3 for G, and 6:5 for A. Aristotle understood that sound consisted of compressions and rarefactions of air, a very good expression of the nature of wave motion.
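The string-length ratios above fix pitches because, for an ideal string, frequency varies inversely with length. The sketch below works them through; the base pitch of 264 Hz for C is an illustrative just-intonation value chosen here, not something stated in the source.

```python
# For an ideal string, frequency is inversely proportional to length,
# so each length ratio relative to the C string gives a pitch below C.
# A string of twice the length (ratio 2:1) sounds the C an octave lower,
# as the text states.

base_c = 264.0  # Hz, an illustrative just-intonation C

length_ratios = {"D": 16/9, "E": 8/5, "F": 3/2,
                 "G": 4/3, "A": 6/5, "C (octave down)": 2}
for note, ratio in length_ratios.items():
    print(note, round(base_c / ratio, 1))
# D 148.5, E 165.0, F 176.0, G 198.0, A 220.0, C (octave down) 132.0
```

With this base pitch the 6:5 string lands exactly on A at 220 Hz, and the doubled string on 132 Hz, one octave below the 264 Hz C.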
The physical understanding of acoustical processes advanced rapidly during and after the Scientific Revolution. Mainly Galileo Galilei but also Marin Mersenne, independently, discovered the complete laws of vibrating strings. Experimental measurements of the speed of sound in air were carried out successfully between 1630 and 1680 by a number of investigators, prominently Mersenne; meanwhile, Newton derived the relationship for wave velocity in solids, a cornerstone of physical acoustics. The eighteenth century saw major advances in acoustics as mathematicians applied the new techniques of calculus to elaborate theories of sound wave propagation. In the 19th century, Wheatstone, Ohm, and Henry developed the analogy between electricity and acoustics. The twentieth century saw a burgeoning of technological applications of the large body of scientific knowledge that was by then in place. The first such application was Sabine's groundbreaking work in architectural acoustics; underwater acoustics was then used for detecting submarines in the First World War.
Acoustics
–
Principles of acoustics were applied since ancient times: Roman theatre in the city of Amman.
Acoustics
–
Artificial omni-directional sound source in an anechoic chamber
Acoustics
–
Jay Pritzker Pavilion
Acoustics
53.
Optics
–
Optics is the branch of physics which involves the behaviour and properties of light, including its interactions with matter and the construction of instruments that use or detect it. Optics usually describes the behaviour of visible, ultraviolet, and infrared light. Because light is an electromagnetic wave, other forms of electromagnetic radiation such as X-rays, microwaves, and radio waves exhibit similar properties. Most optical phenomena can be accounted for using the classical electromagnetic description of light. Complete electromagnetic descriptions of light are, however, often difficult to apply in practice, so practical optics is usually done using simplified models. The most common of these, geometric optics, treats light as a collection of rays that travel in straight lines. Physical optics is a more comprehensive model of light, which includes wave effects such as diffraction and interference that cannot be accounted for in geometric optics. Historically, the ray-based model of light was developed first, followed by the wave model of light. Progress in electromagnetic theory in the 19th century led to the discovery that light waves were in fact electromagnetic radiation. Some phenomena depend on the fact that light has both wave-like and particle-like properties; explanation of these effects requires quantum mechanics. When considering light's particle-like properties, the light is modelled as a collection of particles called photons, and quantum optics deals with the application of quantum mechanics to optical systems. Optical science is relevant to and studied in many related disciplines including astronomy, various engineering fields, and photography. Practical applications of optics are found in a variety of technologies and everyday objects, including mirrors, lenses, telescopes, microscopes, lasers, and fibre optics.
Optics began with the development of lenses by the ancient Egyptians and Mesopotamians. The earliest known lenses, made from polished crystal, often quartz, date from as early as 700 BC, for example Assyrian lenses such as the Layard/Nimrud lens. The ancient Romans and Greeks filled glass spheres with water to make lenses. The word optics comes from the ancient Greek word ὀπτική, meaning "appearance, look". Greek philosophy on optics broke down into two opposing theories on how vision worked: the intromission theory and the emission theory. The intromission approach saw vision as coming from objects casting off copies of themselves that were captured by the eye. Plato first articulated the emission theory, the idea that visual perception is accomplished by rays emitted by the eyes; he also commented on the parity reversal of mirrors in Timaeus. Some hundred years later, Euclid wrote a treatise entitled Optics where he linked vision to geometry, creating geometrical optics. Ptolemy, in his treatise Optics, held an extramission-intromission theory of vision: the rays from the eye formed a cone, the vertex being within the eye. The rays were sensitive, and conveyed information back to the observer's intellect about the distance and orientation of surfaces. He summarised much of Euclid and went on to describe a way to measure the angle of refraction. During the Middle Ages, Greek ideas about optics were resurrected and extended by writers in the Muslim world.
Optics
–
Optics includes study of dispersion of light.
Optics
–
The Nimrud lens
Optics
–
Reproduction of a page of Ibn Sahl 's manuscript showing his knowledge of the law of refraction, now known as Snell's law
Optics
–
Cover of the first edition of Newton's Opticks
54.
Electronics
–
Electronics is the science of controlling electrical energy electrically, in which the electrons have a fundamental role. Commonly, electronic devices contain circuitry consisting primarily or exclusively of active semiconductors supplemented with passive elements; the science of electronics is also considered to be a branch of physics and electrical engineering. The ability of electronic devices to act as switches makes digital information processing possible. Until 1950 this field was called "radio technology" because its principal application was the design and theory of radio transmitters, receivers, and vacuum tubes. Today, most electronic devices use semiconductor components to perform electron control, and this article focuses on engineering aspects of electronics. Components are generally intended to be connected together, usually by being soldered to a printed circuit board. Components may be packaged singly, or in more complex groups as integrated circuits. Some common electronic components are capacitors, inductors, resistors, diodes, transistors, etc. Components are often categorized as active or passive. Vacuum tubes were among the earliest electronic components; they were almost solely responsible for the electronics revolution of the first half of the twentieth century. They took electronics from parlor tricks and gave us radio, television, phonographs, radar, and long-distance telephony, and they played a leading role in the field of microwave and high-power transmission as well as television receivers until the middle of the 1980s. Since that time, solid-state devices have all but completely taken over; vacuum tubes are still used in some specialist applications such as high-power RF amplifiers, cathode ray tubes, specialist audio equipment, guitar amplifiers and some microwave devices. The IBM 608, an early all-transistor calculator, contained more than 3,000 germanium transistors; Thomas J. Watson Jr. ordered all future IBM products to use transistors in their design.
From that time on, transistors were almost exclusively used for computer logic and peripherals. Circuits and components can be divided into two groups: analog and digital. A particular device may consist of circuitry that has one or the other, or a mix of the two types. Most analog electronic appliances, such as radio receivers, are constructed from combinations of a few types of basic circuits. Analog circuits use a continuous range of voltage or current, as opposed to the discrete levels used in digital circuits. The number of different analog circuits so far devised is huge, especially because a "circuit" can be defined as anything from a single component to a system containing thousands of components. Analog circuits are sometimes called linear circuits, although many non-linear effects are used in analog circuits such as mixers, modulators, etc. Good examples of analog circuits include vacuum tube and transistor amplifiers. One rarely finds modern circuits that are entirely analog; these days analog circuitry may use digital or even microprocessor techniques to improve performance, and this type of circuit is usually called "mixed signal" rather than analog or digital. Sometimes it may be difficult to differentiate between analog and digital circuits, as they have elements of both linear and non-linear operation. An example is the comparator, which takes in a continuous range of voltage but only outputs one of two levels, as in a digital circuit.
Electronics
–
Surface-mount electronic components
Electronics
–
Electronics Technician performing a voltage check on a power circuit card in the air navigation equipment room aboard the aircraft carrier USS Abraham Lincoln (CVN 72).
Electronics
–
Hitachi J100 adjustable frequency drive chassis
55.
Ultrasound
–
Ultrasound is sound waves with frequencies higher than the upper audible limit of human hearing. Ultrasound is no different from "normal" (audible) sound in its physical properties, except that humans cannot hear it. The upper audible limit varies from person to person and is approximately 20 kilohertz in healthy, young adults. Ultrasound devices operate with frequencies from 20 kHz up to several gigahertz. Ultrasound is used in many different fields. Ultrasonic devices are used to detect objects and measure distances. Ultrasound imaging or sonography is often used in medicine. In the nondestructive testing of products and structures, ultrasound is used to detect invisible flaws. Industrially, ultrasound is used for cleaning, mixing, and to accelerate chemical processes. Animals such as bats and porpoises use ultrasound for locating prey. Scientists are also studying ultrasound using graphene diaphragms as a method of communication. Acoustics, the science of sound, starts as far back as Pythagoras in the 6th century BC. Echolocation in bats was discovered by Lazzaro Spallanzani in 1794, when he demonstrated that bats hunted and navigated by inaudible sound and not vision. The first technological application of ultrasound was an attempt to detect submarines by Paul Langevin in 1917. The piezoelectric effect, discovered by Jacques and Pierre Curie in 1880, was useful in transducers to generate and detect ultrasonic waves in air and water. Ultrasound is defined by the American National Standards Institute as sound at frequencies greater than 20 kHz. In air at atmospheric pressure, ultrasonic waves have wavelengths of 1.9 cm or less. The upper frequency limit in humans is due to limitations of the middle ear. Auditory sensation can occur if high-intensity ultrasound is fed directly into the human skull and reaches the cochlea through bone conduction, without passing through the middle ear.
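The wavelength bound quoted above follows from λ = v/f at the 20 kHz lower limit of ultrasound. A quick check, assuming a speed of sound of 343 m/s (air at roughly 20 °C; the exact value, and hence the exact wavelength, shifts with temperature):

```python
# Wavelength at the 20 kHz lower ultrasonic limit: lambda = v / f.

v_air = 343.0      # m/s, speed of sound in air at ~20 C (assumed)
f_min = 20_000.0   # Hz, lower limit of the ultrasonic range

wavelength_cm = v_air / f_min * 100
print(wavelength_cm)  # about 1.7 cm, consistent with "1.9 cm or less"
```

Since wavelength falls as frequency rises, every ultrasonic frequency above 20 kHz gives a still shorter wavelength, which is why the text states the bound as "1.9 cm or less".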
Children can hear some high-pitched sounds that older adults cannot hear; the Mosquito is an electronic device that uses a high-pitched frequency to deter loitering by young people. Bats use a variety of ultrasonic ranging techniques to detect their prey, and they can detect frequencies beyond 100 kHz, possibly up to 200 kHz. Many insects have good ultrasonic hearing, and most of these are nocturnal insects listening for echolocating bats. This includes many groups of moths, beetles and praying mantids. Upon hearing a bat, some insects will make evasive manoeuvres to escape being caught. Ultrasonic frequencies trigger a reflex action in the noctuid moth that causes it to drop slightly in its flight to evade attack. Tiger moths also emit clicks which may disturb bats' echolocation. Dogs' and cats' hearing ranges extend into the ultrasound; the top end of a dog's hearing range is about 45 kHz, while a cat's is 64 kHz. The wild ancestors of cats and dogs evolved this higher hearing range to hear sounds made by their preferred prey. A dog whistle is a whistle that emits ultrasound, used for training and calling dogs. Porpoises have the highest known upper hearing limit, at around 160 kHz.
Ultrasound
–
Ultrasound image of a fetus in the womb, viewed at 12 weeks of pregnancy (bidimensional-scan)
Ultrasound
–
An ultrasonic examination
Ultrasound
–
Bats use ultrasounds to navigate in the darkness.
Ultrasound
–
Sonogram of a fetus at 14 weeks (profile)
56.
Number theory
–
Number theory or, in older usage, arithmetic is a branch of pure mathematics devoted primarily to the study of the integers. It is sometimes called "The Queen of Mathematics" because of its foundational place in the discipline. Number theorists study prime numbers as well as the properties of objects made out of integers or defined as generalizations of the integers. Integers can be considered either in themselves or as solutions to equations. Questions in number theory are often best understood through the study of analytical objects that encode properties of the integers, primes or other number-theoretic objects in some fashion. One may also study real numbers in relation to rational numbers. The older term for number theory is arithmetic; by the early twentieth century, it had been superseded by "number theory". The use of the term arithmetic for number theory regained some ground in the second half of the 20th century; in particular, "arithmetical" is preferred as an adjective to "number-theoretic". The first historical find of an arithmetical nature is a fragment of a table: the broken clay tablet Plimpton 322 contains a list of Pythagorean triples, that is, integers (a, b, c) such that a² + b² = c². The triples are too many and too large to have been obtained by brute force. The heading over the first column reads: "The takiltum of the diagonal which has been subtracted such that the width..." The table's layout suggests that it was constructed by means of what amounts, in modern language, to the identity (½(x − 1/x))² + 1 = (½(x + 1/x))². If some other method was used, the triples were first constructed and then reordered by c/a, presumably for actual use as a table. It is not known what these applications may have been, or whether there could have been any; Babylonian astronomy, for example, truly came into its own only later. It has been suggested instead that the table was a source of numerical examples for school problems. While Babylonian number theory (or what survives of Babylonian mathematics that can be called thus) consists of this single, striking fragment, late Neoplatonic sources state that Pythagoras learned mathematics from the Babylonians.
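The identity above does generate Pythagorean triples, and this can be verified directly. Writing x = p/q for integers p > q > 0 and clearing denominators gives the standard parametrisation (p² − q², 2pq, p² + q²); that reduction is a well-known consequence, not something stated in the source, and the function name `triple` is chosen here for illustration.

```python
# Pythagorean triples from the identity
#   ((x - 1/x)/2)^2 + 1 = ((x + 1/x)/2)^2
# with x = p/q: scaling both sides by (2*p*q)^2 clears the denominators
# and yields the integer triple (p^2 - q^2, 2*p*q, p^2 + q^2).

def triple(p, q):
    """Integer Pythagorean triple for integers p > q > 0."""
    return p*p - q*q, 2*p*q, p*p + q*q

for p, q in [(2, 1), (3, 2), (4, 1)]:
    a, b, c = triple(p, q)
    assert a*a + b*b == c*c   # check the Pythagorean relation
    print(a, b, c)
# 3 4 5
# 5 12 13
# 15 8 17
```

The (3, 4, 5) triple from p = 2, q = 1 is the smallest example of the kind of triple listed on the Plimpton 322 tablet.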
Much earlier sources state that Thales and Pythagoras traveled and studied in Egypt. Euclid IX 21–34 is very probably Pythagorean; it is very simple material, but it is all that is needed to prove that √2 is irrational. Pythagorean mystics gave great importance to the odd and the even, and the discovery that √2 is irrational is credited to the early Pythagoreans. This forced a distinction between numbers, on the one hand, and lengths and proportions, on the other hand. The Pythagorean tradition spoke also of so-called polygonal or figurate numbers
Number theory
–
A Lehmer sieve, a primitive digital computer once used for finding primes and solving simple Diophantine equations.
Number theory
–
The Plimpton 322 tablet
Number theory
–
Title page of the 1621 edition of Diophantus' Arithmetica, translated into Latin by Claude Gaspard Bachet de Méziriac.
Number theory
–
Leonhard Euler
57.
Meteorology
–
Meteorology is a branch of the atmospheric sciences which includes atmospheric chemistry and atmospheric physics, with a major focus on weather forecasting. The study of meteorology dates back millennia, though significant progress did not occur until the 18th century. The 19th century saw modest progress in the field after weather observation networks were formed across broad regions. Prior attempts at prediction of weather depended on historical data. Meteorological phenomena are observable weather events that are explained by the science of meteorology. Different spatial scales are used to describe and predict weather on local, regional, and global levels. Meteorology, climatology, atmospheric physics, and atmospheric chemistry are sub-disciplines of the atmospheric sciences. Meteorology and hydrology compose the interdisciplinary field of hydrometeorology. The interactions between Earth's atmosphere and its oceans are part of a coupled ocean-atmosphere system. Meteorology has application in diverse fields such as the military, energy production, transport, and agriculture. The word meteorology is from Greek μετέωρος metéōros "lofty, high" and -λογία -logia "-logy". Varāhamihira's classical work Brihatsamhita, written about 500 AD, provides clear evidence that a deep knowledge of atmospheric processes existed even in those times. In 350 BC, Aristotle wrote Meteorology; Aristotle is considered the founder of meteorology. One of the most impressive achievements described in the Meteorology is the description of what is now known as the hydrologic cycle. Of thunderbolts, Aristotle wrote: "They are all called swooping bolts because they swoop down upon the Earth. Lightning is sometimes smoky, and is then called smoldering lightning; sometimes it darts quickly along, at other times it travels in crooked lines, and is called forked lightning. When it swoops down upon some object it is called swooping lightning." The Greek scientist Theophrastus compiled a book on weather forecasting, called the Book of Signs. 
The work of Theophrastus remained a dominant influence in the study of weather for nearly 2,000 years. In 25 AD, Pomponius Mela, a geographer for the Roman Empire, formalized the climatic zone system. According to Toufic Fahd, around the 9th century, Al-Dinawari wrote the Kitab al-Nabat. Ptolemy wrote on the atmospheric refraction of light in the context of astronomical observations. Roger Bacon was the first to calculate the angular size of the rainbow; he stated that a rainbow summit cannot appear higher than 42 degrees above the horizon. In the late 13th century and early 14th century, Kamāl al-Dīn al-Fārisī and Theodoric of Freiberg were the first to give the correct explanations for the primary rainbow phenomenon. Theodoric went further and also explained the secondary rainbow. In 1716, Edmund Halley suggested that aurorae are caused by magnetic effluvia moving along the Earth's magnetic field lines. In 1441, King Sejong's son, Prince Munjong, invented the first standardized rain gauge; these were sent throughout the Joseon Dynasty of Korea as an official tool to assess land taxes based upon a farmer's potential harvest. In 1450, Leone Battista Alberti developed a swinging-plate anemometer, known as the first anemometer. In 1607, Galileo Galilei constructed a thermoscope
Meteorology
–
Atmospheric sciences
Meteorology
–
Parhelion (sundog) at Savoie
Meteorology
–
Twilight at Baker Beach
Meteorology
–
A hemispherical cup anemometer
58.
Oceanography
–
Oceanography, also known as oceanology, is the study of the physical and the biological aspects of the ocean. Paleoceanography studies the history of the oceans in the geologic past. Humans first acquired knowledge of the waves and currents of the seas and oceans in pre-historic times. Observations on tides were recorded by Aristotle and Strabo. Early exploration of the oceans was primarily for cartography and mainly limited to its surfaces and to the animals that fishermen brought up in nets, though depth soundings by lead line were taken. Although Juan Ponce de León in 1513 first identified the Gulf Stream, and the current was well known to mariners, Benjamin Franklin made the first scientific study of it. Franklin measured water temperatures during several Atlantic crossings and correctly explained the Gulf Stream's cause. Franklin and Timothy Folger printed the first map of the Gulf Stream in 1769–1770. Information on the currents of the Pacific Ocean was gathered by explorers of the late 18th century, including James Cook and Louis Antoine de Bougainville. James Rennell wrote the first scientific textbooks on oceanography, detailing the current flows of the Atlantic. During a voyage around the Cape of Good Hope in 1777, he mapped the banks and currents at the Lagullas. He was also the first to understand the nature of the intermittent current near the Isles of Scilly. Robert FitzRoy published a four-volume report of the Beagle's three voyages. In 1841–1842 Edward Forbes undertook dredging in the Aegean Sea that founded marine ecology. The first superintendent of the United States Naval Observatory, Matthew Fontaine Maury, devoted his time to the study of marine meteorology, navigation, and charting prevailing winds and currents. His 1855 textbook Physical Geography of the Sea was one of the first comprehensive oceanography studies. Many nations sent oceanographic observations to Maury at the Naval Observatory, where he and his colleagues evaluated the information and distributed the results worldwide. 
Despite all this, human knowledge of the oceans remained confined to the topmost few fathoms of the water; almost nothing was known of the ocean depths. The Royal Navy's efforts to chart all of the world's coastlines in the mid-19th century reinforced the vague idea that most of the ocean was very deep. As exploration ignited both popular and scientific interest in the polar regions and Africa, so too did the mysteries of the unexplored oceans. The seminal event in the founding of the science of oceanography was the 1872–76 Challenger expedition. As the first true oceanographic cruise, it laid the groundwork for an entire academic and research discipline. In response to a recommendation from the Royal Society, the British Government announced in 1871 an expedition to explore the world's oceans. Charles Wyville Thomson and Sir John Murray launched the Challenger expedition. The Challenger, leased from the Royal Navy, was modified for scientific work. Under the scientific supervision of Thomson, Challenger travelled nearly 70,000 nautical miles surveying and exploring. On her journey circumnavigating the globe, 492 deep sea soundings, 133 bottom dredges, 151 open water trawls and 263 serial water temperature observations were taken, and around 4,700 new species of marine life were discovered. The result was the Report Of The Scientific Results of the Exploring Voyage of H.M.S. Challenger. Murray, who supervised the publication, described the report as "the greatest advance in the knowledge of our planet since the celebrated discoveries of the fifteenth and sixteenth centuries"
Oceanography
–
HMS Challenger undertook the first global marine research expedition in 1872.
Oceanography
–
Thermohaline circulation
Oceanography
–
Ocean currents (1911)
Oceanography
–
Oceanographic Museum Monaco
59.
Physical science
–
Physical science is a branch of natural science that studies non-living systems, in contrast to life science. It in turn has many branches, each referred to as a physical science. In natural science, hypotheses must be verified scientifically to be regarded as scientific theory. Validity, accuracy, and social mechanisms ensuring quality control, such as peer review and repeatability of findings, are amongst the criteria. Natural science can be broken into two main branches: life science (for example, biology) and physical science. Each of these branches, and all of their sub-branches, are referred to as natural sciences. Physics – natural and physical science that involves the study of matter and its motion through space and time, along with related concepts such as energy and force. More broadly, it is the general analysis of nature, conducted in order to understand how the universe behaves. Branches of astronomy. Chemistry – studies the composition, structure, properties, and change of matter. Branches of chemistry. Earth science – all-embracing term referring to the fields of science dealing with planet Earth. Earth science is the study of how the natural environment works; it includes the study of the atmosphere, hydrosphere, lithosphere, and biosphere. Branches of Earth science. History of physical science – history of the branch of natural science that studies non-living systems. It in turn has many branches, each referred to as a physical science. However, the term "physical" creates an unintended, somewhat arbitrary distinction, since many branches of physical science also study biological phenomena. History of astrodynamics – history of the application of ballistics and celestial mechanics to the practical problems concerning the motion of rockets. History of astrometry – history of the branch of astronomy that involves precise measurements of the positions and movements of stars. History of cosmology – history of the discipline that deals with the nature of the Universe as a whole. 
History of physical cosmology – history of the study of the largest-scale structures and dynamics of the universe. History of planetary science – history of the scientific study of planets, moons, and planetary systems, in particular those of the Solar System and the processes that form them. History of neurophysics – history of the branch of biophysics dealing with the nervous system. History of chemical physics – history of the branch of physics that studies chemical processes from the point of view of physics. History of computational physics – history of the study and implementation of numerical algorithms to solve problems in physics for which a quantitative theory already exists. History of condensed matter physics – history of the study of the physical properties of condensed phases of matter. History of cryogenics – history of the study of the production of very low temperatures and the behavior of materials at those temperatures. History of biomechanics – history of the study of the structure and function of biological systems such as humans, animals, plants, and organs. History of fluid mechanics – history of the study of fluids and the forces on them
Physical science
–
Chemistry, "the central science": partial ordering of the sciences proposed by Balaban and Klein.
60.
Architecture
–
Architecture is both the process and the product of planning, designing, and constructing buildings and other physical structures. Architectural works, in the form of buildings, are often perceived as cultural symbols. Historical civilizations are often identified with their surviving architectural achievements. "Architecture" can mean: a general term to describe buildings and other physical structures; the art and science of designing buildings and nonbuilding structures; the style of design and method of construction of buildings and other physical structures; a unifying or coherent form or structure; knowledge of art, science, technology, and humanity; the design activity of the architect, from the macro-level to the micro-level; or the practice of the architect, where architecture means offering or rendering services in connection with the design and construction of buildings. The earliest surviving work on the subject of architecture is De architectura, by the Roman architect Vitruvius. According to Vitruvius, a good building should satisfy the three principles of firmitas, utilitas, venustas, commonly known by the original translation – firmness, commodity, and delight. An equivalent in modern English would be: Durability – a building should stand up robustly; Utility – it should be suitable for the purposes for which it is used; Beauty – it should be aesthetically pleasing. According to Vitruvius, the architect should strive to fulfill each of these three attributes as well as possible. Leon Battista Alberti, who elaborates on the ideas of Vitruvius in his treatise, De Re Aedificatoria, saw beauty primarily as a matter of proportion. For Alberti, the rules of proportion were those that governed the idealised human figure, the Golden mean. The most important aspect of beauty was, therefore, an inherent part of an object, rather than something applied superficially. Gothic architecture, Pugin believed, was the only true Christian form of architecture. 
The 19th-century English art critic John Ruskin, in his Seven Lamps of Architecture, took a narrower view: architecture was "the art which so disposes and adorns the edifices raised by men ... that the sight of them" contributes "to his mental health, power, and pleasure". For Ruskin, the aesthetic was of overriding significance, and his work goes on to state that a building is not truly a work of architecture unless it is in some way adorned. For Ruskin, a well-constructed, well-proportioned, functional building needed string courses or rustication, at the very least. Le Corbusier, by contrast, distinguished construction from architecture: "But suddenly you touch my heart, you do me good. I am happy and I say: This is beautiful. That is Architecture." Le Corbusier's contemporary Ludwig Mies van der Rohe said "Architecture starts when you carefully put two bricks together." The notable 19th-century architect of skyscrapers, Louis Sullivan, promoted an overriding precept to architectural design: "Form follows function". Function came to be seen as encompassing all criteria of the use, perception and enjoyment of a building, not only practical but also aesthetic, psychological and cultural
Architecture
–
Brunelleschi, in the building of the dome of Florence Cathedral in the early 15th century, not only transformed the building and the city, but also the role and status of the architect.
Architecture
–
Section of Brunelleschi 's dome drawn by the architect Cigoli (c. 1600)
Architecture
–
The Parthenon, Athens, Greece, "the supreme example among architectural sites." (Fletcher).
Architecture
–
The Houses of Parliament, Westminster, master-planned by Charles Barry, with interiors and details by A.W.N. Pugin
61.
Cartography
–
Cartography is the study and practice of making maps. Combining science, aesthetics, and technique, cartography builds on the premise that reality can be modeled in ways that communicate spatial information effectively. The fundamental problems of traditional cartography are to: set the map's agenda and select traits of the object to be mapped (this is the concern of map editing; traits may be physical, such as roads or land masses, or may be abstract, such as toponyms or political boundaries); represent the terrain of the mapped object on flat media (this is the concern of map projections); eliminate characteristics of the mapped object that are not relevant to the map's purpose (this is the concern of generalization); reduce the complexity of the characteristics that will be mapped (this is also the concern of generalization); and orchestrate the elements of the map to best convey its message to its audience (this is the concern of map design). Modern cartography constitutes many theoretical and practical foundations of geographic information systems. The earliest known map is a matter of debate, both because the term "map" isn't well-defined and because some artifacts that might be maps might actually be something else. A wall painting that might depict the ancient Anatolian city of Çatalhöyük has been dated to the late 7th millennium BCE. The oldest surviving world maps are from 9th century BCE Babylonia. One shows Babylon on the Euphrates, surrounded by Assyria, Urartu and several cities, all, in turn, surrounded by a "bitter river"; another depicts Babylon as being north of the world center. The ancient Greeks and Romans created maps from the time of Anaximander in the 6th century BCE. In the 2nd century AD, Ptolemy wrote his treatise on cartography, Geographia. This contained Ptolemy's world map – the world then known to Western society. As early as the 8th century, Arab scholars were translating the works of the Greek geographers into Arabic. In ancient China, geographical literature dates to the 5th century BCE. 
The oldest extant Chinese maps come from the State of Qin, dated back to the 4th century BCE. In the book of the Xin Yi Xiang Fa Yao, published in 1092 by the Chinese scientist Su Song, a star map on the equidistant cylindrical projection appears. Early forms of cartography of India included depictions of the pole star; these charts may have been used for navigation. Mappa mundi are the medieval European maps of the world. Approximately 1,100 mappae mundi are known to have survived from the Middle Ages. Of these, some 900 are found illustrating manuscripts and the remainder exist as stand-alone documents. The Arab geographer Muhammad al-Idrisi produced his medieval atlas Tabula Rogeriana in 1154
Cartography
–
A medieval depiction of the Ecumene (1482, Johannes Schnitzer, engraver), constructed after the coordinates in Ptolemy's Geography and using his second map projection. The translation into Latin and dissemination of Geography in Europe, in the beginning of the 15th century, marked the rebirth of scientific cartography, after more than a millennium of stagnation.
Cartography
–
Valcamonica rock art (I), Paspardo r. 29, topographic composition, 4th millennium BC
Cartography
–
The Bedolina Map and its tracing, 6th–4th century BC
Cartography
–
Copy (1472) of St. Isidore's TO map of the world.
62.
Game development
–
Video game development is the process of creating a video game. Development is undertaken by a developer, which may range from one person to a large business. Traditional commercial PC and console games are normally funded by a publisher and can take years to develop. Indie games can take less time and can be produced cheaply by individuals and small developers. The indie game industry has seen a rise in recent years with the growth of new online distribution systems. The first video games were developed in the 1960s, but required mainframe computers and were not available to the general public. Commercial game development began in the 1970s with the advent of first-generation video game consoles and home computers. Due to the low costs and low capabilities of computers at the time, a lone programmer could develop a full game. The average cost of producing a video game rose from US$1–4 million in 2000 to over $5 million in 2006. Mainstream PC and console games are developed in phases. First, in pre-production, pitches, prototypes, and game design documents are written. If the idea is approved and the developer receives funding, full-scale development begins. This usually involves a team of 20–100 individuals with various responsibilities, including designers, artists, and programmers. Game development is the software development process by which a video game is produced. Games are developed as a creative outlet and to generate profit. Development is normally funded by a publisher, and well-made games bring profit more readily. However, it is important to estimate a game's financial requirements; failing to set clear expectations may result in exceeding the allocated budget. In fact, the majority of games do not produce profit. Most developers cannot afford to change the development schedule mid-way and must estimate their capabilities with available resources before production. The game industry requires innovation, as publishers cannot profit from the constant release of repetitive sequels and imitations. 
Every year new independent development companies open and some manage to develop hit titles. Similarly, many developers close down because they cannot find a publishing contract or their production is not profitable. It is difficult to start a new company due to the high initial investment required
Game development
–
The XGS PIC 16-Bit game development board, a game development tool similar to those used in the 1990s.
63.
Identity (mathematics)
–
In other words, A = B is an identity if A and B define the same functions. This means that an identity is an equality between functions that are differently defined. For example, (a + b)² = a² + 2ab + b² and cos²θ + sin²θ = 1 are identities. Identities are sometimes indicated by the triple bar symbol ≡ instead of =. Geometrically, trigonometric identities are identities involving certain functions of one or more angles. They are distinct from triangle identities, which are identities involving both angles and side lengths of a triangle; only the former are covered in this article. These identities are useful whenever expressions involving trigonometric functions need to be simplified. By contrast, an equation that is not an identity holds only for some values of its variables: for example, the equation cos θ = 1 is true when θ = 0, but false when θ = 2. The following identities hold for all integer exponents, provided that the base is non-zero. Exponentiation is not commutative; this contrasts with addition and multiplication, which are. For example, 2 + 3 = 3 + 2 = 5 and 2 · 3 = 3 · 2 = 6, but 2³ = 8, whereas 3² = 9. Exponentiation is likewise not associative, whereas addition and multiplication are. For example, (2 + 3) + 4 = 2 + (3 + 4) = 9 and (2 · 3) · 4 = 2 · (3 · 4) = 24, but (2³)⁴ is 8⁴ or 4,096, whereas 2^(3⁴) is 2⁸¹ or 2,417,851,639,229,258,349,412,352. Without parentheses to modify the order of calculation, by convention the order in a stacked exponent is top-down, not bottom-up: a^b^c means a^(b^c), not (a^b)^c. Several important formulas, sometimes called logarithmic identities or log laws, relate logarithms to one another. The logarithm of a product is the sum of the logarithms of the numbers being multiplied; the logarithm of the p-th power of a number is p times the logarithm of the number itself; and the logarithm of a p-th root is the logarithm of the number divided by p. The following table lists these identities with examples. Each of the identities can be derived after substitution of the logarithm definitions x = b^(log_b x) and/or y = b^(log_b y) in the left hand sides. The logarithm log_b x can be computed from the logarithms of x and b with respect to an arbitrary base k using the formula log_b x = log_k x / log_k b. Typical scientific calculators calculate the logarithms to bases 10 and e. 
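The non-commutativity and non-associativity of exponentiation described above can be checked directly. A minimal sketch in Python, whose ** operator is right-associative and so matches the top-down convention:

```python
# Addition and multiplication are commutative and associative.
assert 2 + 3 == 3 + 2 == 5
assert 2 * 3 == 3 * 2 == 6
assert (2 + 3) + 4 == 2 + (3 + 4) == 9
assert (2 * 3) * 4 == 2 * (3 * 4) == 24

# Exponentiation is neither.
assert 2 ** 3 == 8 and 3 ** 2 == 9                     # not commutative
assert (2 ** 3) ** 4 == 4096                           # 8**4
assert 2 ** (3 ** 4) == 2417851639229258349412352      # 2**81, much larger

# Python reads a stacked exponent top-down: a ** b ** c is a ** (b ** c).
assert 2 ** 3 ** 4 == 2 ** (3 ** 4)
```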
Logarithms with respect to any base b can be determined using either of these two logarithms by the formula log_b x = log₁₀ x / log₁₀ b = log_e x / log_e b. Given a number x and its logarithm log_b x to an unknown base b, the base is given by b = x^(1/log_b x). The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic ones that does not involve complex numbers. See also: Accounting identity, List of mathematical identities, Encyclopedia of Equation (an online encyclopedia of mathematical identities), and A Collection of Algebraic Identities
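The change-of-base formula and the basic log laws can be verified numerically. A minimal sketch using Python's standard math module:

```python
import math

x, b = 81.0, 3.0

# Change of base: log_b(x) = log_k(x) / log_k(b) for any valid base k.
via_log10 = math.log10(x) / math.log10(b)
via_ln = math.log(x) / math.log(b)
assert math.isclose(via_log10, 4.0)   # log_3(81) = 4
assert math.isclose(via_ln, 4.0)

# Product rule: log(x*y) = log(x) + log(y).
assert math.isclose(math.log(8 * 32), math.log(8) + math.log(32))

# Power rule: log(x**p) = p * log(x).
assert math.isclose(math.log(2 ** 10), 10 * math.log(2))
```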
Identity (mathematics)
–
Visual proof of the Pythagorean identity. For any angle θ, the point (cos(θ), sin(θ)) lies on the unit circle, which satisfies the equation x² + y² = 1. Thus, cos²(θ) + sin²(θ) = 1.
64.
Heron's formula
–
Heron's formula states that the area of a triangle whose sides have lengths a, b, and c is A = √(s(s − a)(s − b)(s − c)), where s is the semiperimeter of the triangle; that is, s = (a + b + c)/2. Heron's formula can also be written as A = ¼√((a + b + c)(−a + b + c)(a − b + c)(a + b − c)) = ¼√(2(a²b² + a²c² + b²c²) − (a⁴ + b⁴ + c⁴)) = ¼√((a² + b² + c²)² − 2(a⁴ + b⁴ + c⁴)) = ¼√(4a²b² − (a² + b² − c²)²). Let △ABC be the triangle with sides a = 4, b = 13 and c = 15. The semiperimeter is s = (a + b + c)/2 = (4 + 13 + 15)/2 = 16, and the area is A = √(16 · 12 · 3 · 1) = √576 = 24. In this example, the side lengths and area are all integers, making it a Heronian triangle. However, Heron's formula works equally well in cases where one or all of these numbers is not an integer. The formula is credited to Heron of Alexandria, and a proof can be found in his book, Metrica, written c. 60 AD. A formula equivalent to Heron's, namely A = ½√(a²c² − ((a² + c² − b²)/2)²), was discovered independently in China; it was published in Shushu Jiuzhang, written by Qin Jiushao and published in 1247. Heron's original proof made use of cyclic quadrilaterals, while other arguments appeal to trigonometry as below, or to the incenter and one excircle of the triangle. A modern proof, which uses algebra and is quite unlike the one provided by Heron, follows. Let a, b, c be the sides of the triangle and α, β, γ the angles opposite those sides. The difference of two squares factorization was used in two different steps. The following proof is very similar to one given by Raifaizen. By the Pythagorean theorem we have b² = h² + d² and a² = h² + (c − d)²; subtracting these yields a² − b² = c² − 2cd. Heron's formula as given above is numerically unstable for triangles with a very small angle when using floating point arithmetic. A stable alternative involves arranging the lengths of the sides so that a ≥ b ≥ c and computing A = ¼√((a + (b + c))(c − (a − b))(c + (a − b))(a + (b − c))). The brackets in the above formula are required in order to prevent numerical instability in the evaluation. Three other area formulas have the same structure as Heron's formula but are expressed in terms of different variables: first, denoting the medians from sides a, b, and c respectively as ma, mb, and mc, one such formula gives the area in terms of the medians. Heron's formula is a special case of Brahmagupta's formula for the area of a cyclic quadrilateral. 
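Both the naive formula and the stable rearrangement can be sketched in a few lines of Python; this is an illustrative implementation, with the worked 4–13–15 example from the text:

```python
import math

def heron(a, b, c):
    """Naive Heron: A = sqrt(s(s-a)(s-b)(s-c)) with s the semiperimeter."""
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def heron_stable(a, b, c):
    """Numerically stable variant: sort so a >= b >= c and keep the inner
    parentheses exactly as written -- they prevent cancellation error."""
    a, b, c = sorted((a, b, c), reverse=True)
    return 0.25 * math.sqrt(
        (a + (b + c)) * (c - (a - b)) * (c + (a - b)) * (a + (b - c))
    )

print(heron(4, 13, 15))         # 24.0 -- a Heronian triangle
print(heron_stable(4, 13, 15))  # 24.0
```

For well-conditioned triangles the two agree; the stable form matters for needle-like triangles, where the naive product s(s − a)(s − b)(s − c) loses precision.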
Heron's formula and Brahmagupta's formula are both special cases of Bretschneider's formula for the area of a quadrilateral. Heron's formula can be obtained from Brahmagupta's formula or Bretschneider's formula by setting one of the sides of the quadrilateral to zero. Heron's formula is also a special case of the formula for the area of a trapezoid or trapezium based only on its sides; Heron's formula is obtained by setting the smaller parallel side to zero. Another generalization of Heron's formula to pentagons and hexagons inscribed in a circle was discovered by David P. Robbins
Heron's formula
–
A triangle with sides a, b, and c.
65.
Circumcircle
–
In geometry, the circumscribed circle or circumcircle of a polygon is a circle which passes through all the vertices of the polygon. The center of this circle is called the circumcenter and its radius is called the circumradius. A polygon which has a circumscribed circle is called a cyclic polygon. All regular simple polygons, all isosceles trapezoids, all triangles, and all rectangles are cyclic. A related notion is the one of a minimum bounding circle, which is the smallest circle that completely contains the polygon within it. All triangles are cyclic, i.e. every triangle has a circumscribed circle. This can be proven on the grounds that the general equation for a circle with center (a, b) and radius r in the Cartesian coordinate system is (x − a)² + (y − b)² = r². Since this equation has three parameters (a, b, r), only three points' coordinate pairs are required to determine the equation of a circle. Since a triangle is defined by its three vertices, and exactly three points are required to determine a circle, every triangle can be circumscribed. The circumcenter of a triangle can be constructed by drawing any two of the three perpendicular bisectors; the center is the point where the perpendicular bisectors intersect, and the radius is the distance to any of the three vertices. This is because the circumcenter is equidistant from any pair of the triangle's vertices. In coastal navigation, a triangle's circumcircle is sometimes used as a way of obtaining a position line using a sextant when no compass is available. The horizontal angle between two landmarks defines the circumcircle upon which the observer lies. In the Euclidean plane, it is possible to give explicitly an equation of the circumcircle in terms of the Cartesian coordinates of the vertices of the inscribed triangle. Suppose that A = (Ax, Ay), B = (Bx, By), C = (Cx, Cy) are the coordinates of points A, B, and C. Using the polarization identity, these equations reduce to the condition that a certain matrix has a nonzero kernel. 
Thus the circumcircle may alternatively be described as the locus of zeros of the determinant of this matrix. A similar approach allows one to deduce the equation of the circumsphere of a tetrahedron. A unit vector perpendicular to the plane containing the circle is given by n̂ = ((P₂ − P₁) × (P₃ − P₁)) / |(P₂ − P₁) × (P₃ − P₁)|. An equation for the circumcircle in trilinear coordinates x : y : z is a/x + b/y + c/z = 0; an equation for the circumcircle in barycentric coordinates x : y : z is a²/x + b²/y + c²/z = 0. The isogonal conjugate of the circumcircle is the line at infinity, given in trilinear coordinates by ax + by + cz = 0. Additionally, the circumcircle of a triangle embedded in d dimensions can be found using a generalized method. Let A, B, and C be d-dimensional points, which form the vertices of a triangle. We start by transposing the system to place C at the origin: a = A − C, b = B − C. The circumcenter, p₀, is then given by p₀ = ((‖a‖²b − ‖b‖²a) × (a × b)) / (2‖a × b‖²) + C. The Cartesian coordinates of the circumcenter are Ux = (1/D)[(Ax² + Ay²)(By − Cy) + (Bx² + By²)(Cy − Ay) + (Cx² + Cy²)(Ay − By)] and Uy = (1/D)[(Ax² + Ay²)(Cx − Bx) + (Bx² + By²)(Ax − Cx) + (Cx² + Cy²)(Bx − Ax)], with D = 2[Ax(By − Cy) + Bx(Cy − Ay) + Cx(Ay − By)]. Without loss of generality this can be expressed in a simplified form after translation of the vertex A to the origin of the Cartesian coordinate systems
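The Cartesian circumcenter formula can be sketched directly in Python; the function name `circumcenter` is illustrative. The example uses a right triangle, for which the circumcenter is the midpoint of the hypotenuse:

```python
def circumcenter(A, B, C):
    """Circumcenter of triangle ABC in the plane, from the standard
    Cartesian formula; raises if the points are collinear (D == 0)."""
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    D = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if D == 0:
        raise ValueError("points are collinear")
    ux = ((ax**2 + ay**2) * (by - cy)
          + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / D
    uy = ((ax**2 + ay**2) * (cx - bx)
          + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / D
    return ux, uy

# Right triangle with legs 6 and 8: hypotenuse midpoint is (3, 4),
# and the circumradius is 5 (distance from (3, 4) to each vertex).
print(circumcenter((0, 0), (6, 0), (0, 8)))  # (3.0, 4.0)
```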
Circumcircle
–
Circumscribed circle, C, and circumcenter, O, of a cyclic polygon, P
66.
Imaginary unit
–
The imaginary unit or unit imaginary number (i) is a solution to the quadratic equation x² + 1 = 0. The term "imaginary" is used because there is no real number having a negative square. There are two complex square roots of −1, namely i and −i, just as there are two complex square roots of every real number other than zero, which has one double square root. In contexts where i is ambiguous or problematic, j or the Greek ι is sometimes used. In the disciplines of electrical engineering and control systems engineering, the imaginary unit is normally denoted by j instead of i, because i is commonly used to denote electric current. For the history of the imaginary unit, see Complex number § History. The imaginary number i is defined solely by the property that its square is −1. With i defined this way, it follows directly from algebra that i and −i are both square roots of −1. In polar form, i is represented as 1·e^(iπ/2), having an absolute value of 1 and an argument of π/2. In the complex plane, i is the point located one unit from the origin along the imaginary axis. More precisely, once a solution i of the equation has been fixed, the value −i, which is distinct from i, is also a solution. Since the equation is the only definition of i, it appears that the definition is ambiguous. However, no ambiguity results as long as one or other of the solutions is chosen and labelled as i. This is because, although −i and i are not quantitatively equivalent, there is no algebraic difference between i and −i. Both imaginary numbers have equal claim to being the number whose square is −1. The issue can be a subtle one; see also Complex conjugate and Galois group. A more precise explanation is to say that the automorphism group of the special orthogonal group SO(2, ℝ) has exactly two elements: the identity and the automorphism which exchanges CW and CCW rotations. All these ambiguities can be solved by adopting a more rigorous definition of complex number. For example, the ordered pair (0, 1), in the usual construction of the complex numbers with two-dimensional vectors. 
The imaginary unit is sometimes written √−1 in advanced mathematics contexts; however, great care needs to be taken when manipulating formulas involving radicals. The radical sign notation is reserved either for the principal square root function, defined only for real x ≥ 0, or for the principal branch of the complex square root function. Applying the familiar radical rules outside their domain of validity produces fallacies such as 1/i = √1/√−1 = √(1/(−1)) = √((−1)/1) = √−1 = i, whereas in fact 1/i = −i. The calculation rules √a · √b = √(a · b) and √a/√b = √(a/b) are only valid for real, non-negative values of a and b. These problems are avoided by writing and manipulating expressions like i√7, rather than √−7. For a more thorough discussion, see Square root and Branch point
Imaginary unit
–
i in the complex or cartesian plane. Real numbers lie on the horizontal axis, and imaginary numbers lie on the vertical axis
67.
Skinny triangle
–
A skinny triangle in trigonometry is a triangle whose height is much greater than its base. The solution of such triangles can be greatly simplified by using the approximation that the sine of a small angle is equal to that angle in radians. The solution is especially simple for skinny triangles that are also isosceles or right triangles. The skinny triangle finds uses in surveying, astronomy and shooting. The proof of the skinny triangle solution follows from the small-angle approximation by applying the law of sines. Applying the small-angle approximation to the law of sines gives b ≈ rθ, where b is the base, r the length of the long sides, and θ the apex angle in radians. This result is equivalent to assuming that the length of the base of the triangle is equal to the length of the arc of a circle of radius r subtended by angle θ. The approximation becomes more accurate for smaller and smaller θ; the error is 10% or less for angles less than about 43°. For skinny right triangles, where the approximation tan θ ≈ θ is used instead, the error is less than 10% for angles of 31° or less. Applications of the skinny triangle occur in any situation where the distance to a far object is to be determined. This can occur in surveying and astronomy, and it also has military applications. The skinny triangle is used in astronomy to measure the distance to solar system objects. The base of the triangle is formed by the distance between two measuring stations, and the angle θ is the parallax angle formed by the object as seen by the two stations. This baseline is usually very long for best accuracy; in principle the stations could be on opposite sides of the Earth. However, this distance is still short compared to the distance to the object being measured. The alternative method of measuring the base angles is theoretically possible but less accurate: the base angles are very nearly right angles and would need to be measured with much greater precision than the apex angle in order to get the same accuracy. The same method of measuring parallax angles and applying the skinny triangle can be used to measure the distances to stars. 
In the case of stars, however, a baseline longer than the diameter of the Earth is usually required. Instead of using two stations on the baseline, two measurements are made from the same station at different times of year; during the intervening period, the orbit of the Earth around the Sun moves the station a great distance. This baseline can be as long as the major axis of the Earth's orbit or, equivalently, two astronomical units.
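As a rough numerical illustration of the skinny-triangle solution b ≈ rθ (the baseline and angle values below are invented for the example, not taken from the text):

```python
import math

def skinny_distance(baseline, angle_rad):
    """Distance to a far object from the skinny-triangle approximation r ≈ b / θ.

    baseline:  length of the base b (separation of the two measuring stations)
    angle_rad: small apex angle θ subtended by the baseline, in radians
    """
    return baseline / angle_rad

# Hypothetical survey: a 1 km baseline subtends 0.001 rad at the target.
approx = skinny_distance(1.0, 0.001)        # skinny-triangle estimate, in km
exact = (1.0 / 2) / math.tan(0.001 / 2)     # exact isosceles-triangle distance
print(approx, exact)
```

For θ = 0.001 rad the approximate and exact distances agree to better than one part in a million, which is why the approximation is standard in parallax work.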
Skinny triangle
–
Fig.2 Length of arc l approaches length of chord b as angle θ decreases
68.
Oxford University Press US
–
Oxford University Press (OUP) is the largest university press in the world, and the second oldest after Cambridge University Press. It is a department of the University of Oxford and is governed by a group of 15 academics appointed by the vice-chancellor, known as the delegates of the press. They are headed by the secretary to the delegates, who serves as OUP's chief executive; Oxford University has used a similar system to oversee OUP since the 17th century. The university became involved in the print trade around 1480 and grew into a major printer of Bibles and prayer books. OUP took on the project that became the Oxford English Dictionary in the late 19th century. Moves into international markets led to OUP opening its own offices outside the United Kingdom. By contracting out its printing and binding operations, the modern OUP publishes some 6,000 new titles around the world each year. OUP was first exempted from United States corporation tax in 1972; as a department of a charity, OUP is exempt from income tax and corporate tax in most countries, but may pay sales and other commercial taxes on its products. OUP today transfers 30% of its surplus to the rest of the university. OUP is the largest university press in the world by number of publications, publishing more than 6,000 new books every year. The Oxford University Press Museum is located on Great Clarendon Street, Oxford. Visits must be booked in advance and are led by a member of the archive staff; displays include a 19th-century printing press, the OUP buildings, and the printing and history of the Oxford Almanack, Alice in Wonderland and the Oxford English Dictionary. The first printer associated with Oxford University was Theoderic Rood, but the first book printed in Oxford, in 1478, an edition of Rufinus's Expositio in symbolum apostolorum, was printed by another, anonymous, printer.
Famously, this was mis-dated in Roman numerals as 1468, thus apparently pre-dating Caxton. Rood's printing included John Ankywyll's Compendium totius grammaticae, which set new standards for the teaching of Latin grammar. After Rood, printing connected with the university remained sporadic for over half a century. The chancellor, Robert Dudley, 1st Earl of Leicester, pleaded Oxford's case; some royal assent was obtained, and the printer Joseph Barnes began work. Oxford's chancellor, Archbishop William Laud, consolidated the legal status of the university's printing in the 1630s. Laud envisaged a unified press of world repute: Oxford would establish it on university property, govern its operations, employ its staff, determine its printed work, and benefit from its proceeds. To that end, he petitioned Charles I for rights that would enable Oxford to compete with the Stationers' Company and the King's Printer. These were brought together in Oxford's Great Charter in 1636, which gave the university the right to print all manner of books. Laud also obtained from the Crown the privilege of printing the King James or Authorized Version of Scripture at Oxford; this privilege created substantial returns over the next 250 years, although initially it was held in abeyance. The Stationers' Company was deeply alarmed by the threat to its trade and reached an agreement under which the Stationers paid an annual rent for the university not to exercise its full printing rights, money Oxford used to purchase new printing equipment for smaller purposes.
Oxford University Press US
–
Oxford University Press on Walton Street.
Oxford University Press US
–
Oxford University Press
Oxford University Press US
–
2008 conference booth
69.
Encyclopedia of Mathematics
–
The Encyclopedia of Mathematics is a large reference work in mathematics. It is available in book form and on CD-ROM. The 2002 version contains more than 8,000 entries covering most areas of mathematics at a graduate level. The encyclopedia is edited by Michiel Hazewinkel and was published by Kluwer Academic Publishers until 2003. The CD-ROM contains animations and three-dimensional objects. Until November 29, 2011, a static version of the encyclopedia could be browsed online free of charge; that URL now redirects to the new wiki incarnation of the EOM. A new dynamic version of the encyclopedia is now available as a public wiki online; this new wiki is a collaboration between Springer and the European Mathematical Society. The new version includes the entire contents of the previous online version. All entries are monitored for content accuracy by members of an editorial board selected by the European Mathematical Society.

Editions:
Vinogradov, I. M., Matematicheskaya entsiklopediya, Moscow, Sov.
Hazewinkel, M., Encyclopaedia of Mathematics, Kluwer, 1994.
Hazewinkel, M., Encyclopaedia of Mathematics, Vols. 1–2, Kluwer, 1988.
Hazewinkel, M., Encyclopaedia of Mathematics, Vols. 3–4, Kluwer, 1989.
Hazewinkel, M., Encyclopaedia of Mathematics, Vols. 5–6, Kluwer, 1990.
Hazewinkel, M., Encyclopaedia of Mathematics, Vols. 7–8, Kluwer, 1992.
Hazewinkel, M., Encyclopaedia of Mathematics, Vols. 9–10, Kluwer, 1994.
Hazewinkel, M., Encyclopaedia of Mathematics, Supplement I, Kluwer, 1997.
Hazewinkel, M., Encyclopaedia of Mathematics, Supplement II, Kluwer, 2000.
Hazewinkel, M., Encyclopaedia of Mathematics, Supplement III, Kluwer, 2002.
Hazewinkel, M., Encyclopaedia of Mathematics on CD-ROM, Kluwer, 1998.
Encyclopedia of Mathematics, public wiki monitored by an editorial board under the management of the European Mathematical Society.
List of online encyclopedias
Current page of M. Hazewinkel
Online Encyclopedia of Mathematics
Encyclopedia of Mathematics
–
Encyclopedia of Mathematics snapshot
Encyclopedia of Mathematics
–
A complete set of Encyclopedia of Mathematics at a university library.
70.
Clark University
–
Clark University is an American private research university located in Worcester, Massachusetts, the second largest city in New England. It is adjacent to University Park, about 50 miles west of Boston. Founded in 1887 with a large endowment from its namesake Jonas Gilman Clark, a prominent businessman, Clark was one of the first modern research universities in the United States. Originally an all-graduate institution, Clark admitted its first undergraduates in 1902. It has been ranked by U.S. News & World Report and named one of 40 Colleges That Change Lives. The university competes intercollegiately in 17 NCAA Division III varsity sports as the Clark Cougars and is a part of the New England Women's and Men's Athletic Conference; intramural and club sports are also offered in a wide range of activities. Clark was ranked no. 27 on the U.S. News list of Best Value Schools. The university is also the alma mater of at least three living billionaires, and its alumni have won two Pulitzer Prizes and an Emmy Award. An Act of Incorporation was duly enacted by the legislature and signed by the governor on March 31 of that same year. Opening on October 2, 1889, Clark was the first all-graduate university in the United States, with departments in mathematics, physics, chemistry, biology, and psychology. G. Stanley Hall was appointed the first president of Clark University in 1888. He had been a professor of psychology and pedagogy at Johns Hopkins University. Hall spent seven months in Europe visiting other universities and recruiting faculty. He became the founder of the American Psychological Association and had earned the first Ph.D. in psychology in the United States at Harvard; Clark has played a prominent role in the development of psychology as a distinguished discipline in the United States ever since. An undergraduate college had been opposed by President Hall in years past, but Clark College opened in 1902.
Clark College and Clark University had different presidents until Hall's retirement in 1920. Clark University began admitting women after Clark's death, and the first female Ph.D. in psychology was awarded in 1908. Early Ph.D. students in psychology were ethnically diverse; in 1920, Francis Sumner became the first African American to earn a Ph.D. in psychology. Clark withdrew its membership in the Association of American Universities in 1999, citing a conflict with its mission. In order to celebrate the 20th anniversary of Clark's opening, President Hall invited a number of leading thinkers, including Sigmund Freud, to the university. This was Freud's only set of lectures in the United States. In the 1920s Robert Goddard, a pioneer of rocketry considered one of the founders of space and missile technology, served as a professor and chairman of the Physics Department. On November 23, 1929, noted aviator Charles Lindbergh visited campus. The Robert H. Goddard Library, a distinctive modern building in the brutalist style by architect John M. Johansen, was completed in 1969. In 1963, student D'Army Bailey invited Malcolm X to campus; he delivered a speech in Atwood Hall. On March 15, 1968, the Jimi Hendrix Experience performed at Clark University as part of the band's American tour in support of Axis: Bold as Love. The Experience played in Atwood Hall, which could accommodate more than six hundred students. Tickets for the sold-out concerts were priced at $3.00 and $3.50.
Clark University
–
Group photo 1909 in front of Clark University. Front row: Sigmund Freud, G. Stanley Hall, Carl Jung; back row: Abraham A. Brill, Ernest Jones, Sándor Ferenczi.
Clark University
–
Clark University
Clark University
–
Main façade of Jonas Clark Hall, the main academic facility for undergraduate students.
Clark University
–
The Traina Center for the Arts is located in the former Downing Street School.
71.
Algebra
–
Algebra is one of the broad parts of mathematics, together with number theory, geometry and analysis. In its most general form, algebra is the study of mathematical symbols and the rules for manipulating those symbols; as such, it includes everything from elementary equation solving to the study of abstractions such as groups, rings, and fields. The more basic parts of algebra are called elementary algebra; the more abstract parts are called abstract algebra or modern algebra. Elementary algebra is generally considered to be essential for any study of mathematics, science, or engineering, as well as such applications as medicine and economics; abstract algebra is a major area in advanced mathematics, studied primarily by professional mathematicians. Elementary algebra differs from arithmetic in the use of abstractions, such as using letters to stand for numbers that are unknown or allowed to take on many values. For example, in x + 2 = 5 the letter x is unknown; in E = mc2, the letters E and m are variables, and the letter c is a constant, the speed of light in a vacuum. Algebra gives methods for solving equations and expressing formulas that are much easier than the older method of writing everything out in words. The word algebra is also used in certain specialized ways. A special kind of mathematical object in abstract algebra is called an algebra. A mathematician who does research in algebra is called an algebraist. The word algebra comes from the Arabic الجبر, from the title of the book Ilm al-jabr wa'l-muḳābala by the Persian mathematician and astronomer al-Khwarizmi. The word entered the English language during the fifteenth century, from either Spanish, Italian, or Medieval Latin. It originally referred to the procedure of setting broken or dislocated bones. The mathematical meaning was first recorded in the sixteenth century. The word algebra has several related meanings in mathematics, as a single word or with qualifiers.
As a single word without an article, algebra names a broad part of mathematics. As a single word with an article or in the plural, an algebra or algebras denotes a specific mathematical structure, whose precise definition depends on the author. Usually the structure has an addition, a multiplication, and a scalar multiplication. When some authors use the term algebra, they make a subset of the following additional assumptions: associative, commutative, unital, and/or finite-dimensional. In universal algebra, the word refers to a generalization of the above concept. With a qualifier, there is the same distinction: without an article, it means a part of algebra, such as linear algebra or elementary algebra; with an article, it means an instance of some abstract structure, like a Lie algebra. Sometimes both meanings exist for the same qualifier, as in the sentence: Commutative algebra is the study of commutative rings, which are commutative algebras over the integers.
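The elementary step of solving an equation like x + 2 = 5 by undoing operations can be sketched in Python; this is a generic illustration, and the helper name solve_linear is ours, not from any source:

```python
from fractions import Fraction

def solve_linear(a, b, c):
    """Solve a*x + b = c for x by inverse operations:
    subtract b from both sides, then divide both sides by a."""
    if a == 0:
        raise ValueError("not a linear equation in x")
    return Fraction(c - b, a)

print(solve_linear(1, 2, 5))   # x + 2 = 5  gives  x = 3
print(solve_linear(3, -1, 7))  # 3x - 1 = 7  gives  x = 8/3
```

Using exact Fraction arithmetic keeps answers like 8/3 exact instead of introducing floating-point rounding.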
Algebra
–
A page from Al-Khwārizmī 's al-Kitāb al-muḫtaṣar fī ḥisāb al-ğabr wa-l-muqābala
Algebra
–
Italian mathematician Girolamo Cardano published the solutions to the cubic and quartic equations in his 1545 book Ars magna.
72.
Arithmetic
–
Arithmetic is a branch of mathematics that consists of the study of numbers, especially the properties of the traditional operations on them: addition, subtraction, multiplication and division. Arithmetic is an elementary part of number theory, and number theory is considered to be one of the top-level divisions of modern mathematics, along with algebra, geometry, and analysis. The terms arithmetic and higher arithmetic were used until the beginning of the 20th century as synonyms for number theory and are still used to refer to a wider part of number theory. The earliest written records indicate that the Egyptians and Babylonians used all the elementary arithmetic operations as early as 2000 BC. These artifacts do not always reveal the specific process used for solving problems, but the characteristics of the particular numeral system strongly influence the complexity of the methods. The hieroglyphic system for Egyptian numerals, like the later Roman numerals, descended from tally marks used for counting; in both cases, this origin resulted in values that used a decimal base but did not include positional notation. Complex calculations with Roman numerals required the assistance of a counting board or the Roman abacus to obtain the results. Early number systems that included positional notation were not decimal, including the sexagesimal system for Babylonian numerals. Because of positional notation, the ability to reuse the same digits for different values contributed to simpler and more efficient methods of calculation. The continuous historical development of modern arithmetic starts with the Hellenistic civilization of ancient Greece. Prior to the works of Euclid around 300 BC, Greek studies in mathematics overlapped with philosophical and mystical beliefs. For example, Nicomachus summarized the viewpoint of the earlier Pythagorean approach to numbers and their relationships to each other. Greek numerals were used by Archimedes, Diophantus and others in a positional notation not very different from ours.
Because the ancient Greeks lacked a symbol for zero, they used three separate sets of symbols: one set for the units place, one for the tens place, and one for the hundreds. Then for the thousands place they would reuse the symbols for the units place, and so on. Their addition algorithm was identical to ours, and their multiplication algorithm was only very slightly different. Their long division algorithm was the same, and the digit-by-digit square root algorithm once taught in school was known to Archimedes. He preferred it to Hero's method of successive approximation because, once computed, a digit does not change, and the square roots of perfect squares, such as 7485696, terminate immediately as 2736. For numbers with a fractional part, such as 546.934, the same digit-by-digit procedure continues past the decimal point. The ancient Chinese used a similar positional notation. Because they also lacked a symbol for zero, they had one set of symbols for the units place and a second set for the tens place.
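Hero's method of successive approximation mentioned above, repeatedly averaging a guess x with n/x, can be sketched in Python; an illustrative implementation, not a historical reconstruction:

```python
def heron_sqrt(n, tol=1e-12):
    """Approximate sqrt(n) for n >= 1 by Hero's (Heron's) method:
    replace the guess x with the average of x and n/x until x*x is close to n."""
    x = float(n)
    while abs(x * x - n) > tol * n:
        x = (x + n / x) / 2
    return x

print(heron_sqrt(7485696))  # converges to 2736, since 2736**2 == 7485696
print(heron_sqrt(2))        # converges to 1.41421356...
```

Each iteration roughly doubles the number of correct digits, which is why the loop terminates after only a few dozen steps even for large n.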
Arithmetic
–
Arithmetic tables for children, Lausanne, 1835
Arithmetic
–
A scale calibrated in imperial units with an associated cost display.
73.
Category theory
–
Category theory formalizes mathematical structure and its concepts in terms of a collection of objects and of arrows. A category has two basic properties: the ability to compose the arrows associatively and the existence of an identity arrow for each object. The language of category theory has been used to formalize concepts of other high-level abstractions such as sets, rings, and groups. Several terms used in category theory, including the term morphism, are used differently from their uses in the rest of mathematics; in category theory, morphisms obey conditions specific to category theory itself. Category theory has practical applications in programming language theory, in particular for the study of monads in functional programming. Categories represent abstractions of other mathematical concepts; many areas of mathematics can be formalised by category theory as categories. Hence category theory uses abstraction to make it possible to state and prove many intricate and subtle results in these fields in a much simpler way. A basic example of a category is the category of sets, where the objects are sets and the arrows are functions from one set to another. However, the objects of a category need not be sets: any way of formalising a mathematical concept such that it meets the basic conditions on the behaviour of objects and arrows is a valid category, and all the results of category theory apply to it. The arrows of category theory are often said to represent a process connecting two objects, or in many cases a structure-preserving transformation connecting two objects. There are, however, many applications where much more abstract concepts are represented by objects and morphisms. The most important property of the arrows is that they can be composed, in other words arranged in a sequence to form a new arrow. Linear algebra, for example, can also be expressed in terms of categories of matrices. A systematic study of category theory then allows us to prove general results about any of these types of mathematical structures from the axioms of a category.
The class Grp of groups consists of all objects having a group structure. One can proceed to prove theorems about groups by making logical deductions from the set of axioms. For example, it is immediately proven from the axioms that the identity element of a group is unique. In the case of groups, the morphisms are the group homomorphisms, and the study of group homomorphisms then provides a tool for studying properties of groups. Not all categories arise as structure-preserving functions, however; the standard example is the category of homotopies between pointed topological spaces. If one axiomatizes relations instead of functions, one obtains the theory of allegories. A category is itself a type of mathematical structure, so we can look for processes which preserve this structure in some sense; such a process is called a functor. Diagram chasing is a visual method of arguing with abstract arrows joined in diagrams. Functors are represented by arrows between categories, subject to specific defining commutativity conditions; functors can define categorical diagrams and sequences.
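In the category of sets mentioned above, the arrows are ordinary functions and composition is function composition. The two axioms of a category, associative composition and identity arrows, can be spot-checked in Python (an illustrative sketch; the sample arrows are arbitrary):

```python
def compose(g, f):
    """Arrow composition g ∘ f: apply f first, then g."""
    return lambda x: g(f(x))

identity = lambda x: x   # the identity arrow on each object

f = lambda n: n + 1      # an arrow X -> Y
g = lambda n: 2 * n      # an arrow Y -> Z
h = lambda n: n ** 2     # an arrow Z -> W

# Associativity: h ∘ (g ∘ f) == (h ∘ g) ∘ f, checked on sample inputs.
for x in range(5):
    assert compose(h, compose(g, f))(x) == compose(compose(h, g), f)(x)

# Identity laws: f ∘ id == f == id ∘ f, checked on sample inputs.
for x in range(5):
    assert compose(f, identity)(x) == f(x) == compose(identity, f)(x)
```

Checking on samples does not prove the laws, but for function composition they hold by definition; the sketch just makes the axioms concrete.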
Category theory
–
Schematic representation of a category with objects X, Y, Z and morphisms f, g, g ∘ f. (The category's three identity morphisms 1 X, 1 Y and 1 Z, if explicitly represented, would appear as three arrows, next to the letters X, Y, and Z, respectively, each having as its "shaft" a circular arc measuring almost 360 degrees.)
74.
Combinatorics
–
Combinatorics is a branch of mathematics concerning the study of finite or countable discrete structures. Many combinatorial questions have historically been considered in isolation, giving an ad hoc solution to a problem arising in some mathematical context. In the later twentieth century, however, powerful and general methods were developed, making combinatorics into an independent branch of mathematics in its own right. One of the oldest and most accessible parts of combinatorics is graph theory. Combinatorics is used frequently in computer science to obtain formulas and estimates in the analysis of algorithms. A mathematician who studies combinatorics is called a combinatorialist or a combinatorist. Basic combinatorial concepts and enumerative results appeared throughout the ancient world. The Greek historian Plutarch discusses an argument between Chrysippus and Hipparchus over a rather delicate enumerative problem, which was later shown to be related to Schröder–Hipparchus numbers. In the Ostomachion, Archimedes considers a tiling puzzle. In the Middle Ages, combinatorics continued to be studied, largely outside of the European civilization. The Indian mathematician Mahāvīra provided formulae for the number of permutations and combinations. Later, in Medieval England, campanology provided examples of what is now known as Hamiltonian cycles in certain Cayley graphs on permutations. During the Renaissance, together with the rest of mathematics and the sciences, combinatorics enjoyed a rebirth; works of Pascal, Newton, Jacob Bernoulli and Euler became foundational in the emerging field. In modern times, the works of J. J. Sylvester and Percy MacMahon helped lay the foundation for enumerative and algebraic combinatorics. Graph theory also enjoyed an explosion of interest at the same time, especially in connection with the four color problem. In the second half of the 20th century, combinatorics enjoyed rapid growth; in part, the growth was spurred by new connections and applications to other fields, ranging from algebra to probability, from functional analysis to number theory, etc.
These connections blurred the boundaries between combinatorics and parts of mathematics and theoretical computer science, but at the same time led to a partial fragmentation of the field. Enumerative combinatorics is the most classical area of combinatorics and concentrates on counting the number of certain combinatorial objects. Although counting the number of elements in a set is a rather broad mathematical problem, many of the problems that arise in applications have a relatively simple combinatorial description. The Fibonacci numbers are the basic example of a problem in enumerative combinatorics. The twelvefold way provides a unified framework for counting permutations, combinations and partitions. Analytic combinatorics concerns the enumeration of combinatorial structures using tools from complex analysis. In contrast with enumerative combinatorics, which uses explicit combinatorial formulae and generating functions to describe the results, analytic combinatorics aims at obtaining asymptotic formulae. Partition theory studies various enumeration and asymptotic problems related to integer partitions. Originally a part of number theory and analysis, it is now considered a part of combinatorics or an independent field. It incorporates the bijective approach and various tools in analysis and analytic number theory. Graphs are basic objects in combinatorics.
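Two of the enumerative examples above can be made concrete in Python: binomial counting via the standard-library math.comb, and the Fibonacci numbers arising as the count of tilings of a 1×n strip by squares and dominoes (a standard enumerative problem; the function name tilings is ours):

```python
from math import comb

# Number of 5-card hands from a 52-card deck (combinations).
print(comb(52, 5))   # 2598960

# Enumerative example: f(n), the number of tilings of a 1×n strip by
# 1×1 squares and 1×2 dominoes, satisfies f(n) = f(n-1) + f(n-2)
# (condition on whether the last tile is a square or a domino),
# which is the Fibonacci recurrence.
def tilings(n):
    a, b = 1, 1          # f(0) = 1 (empty tiling), f(1) = 1
    for _ in range(n - 1):
        a, b = b, a + b
    return b if n >= 1 else 1

print([tilings(n) for n in range(1, 8)])  # [1, 2, 3, 5, 8, 13, 21]
```

The recurrence argument is the whole proof: every tiling of length n ends in either a square (leaving length n-1) or a domino (leaving length n-2).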
Combinatorics
–
An example of change ringing (with six bells), one of the earliest nontrivial results in Graph Theory.
75.
Group theory
–
In mathematics and abstract algebra, group theory studies the algebraic structures known as groups. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra; linear algebraic groups and Lie groups are two branches of group theory that have experienced advances and have become subject areas in their own right. Various physical systems, such as crystals and the hydrogen atom, may be modelled by symmetry groups; thus group theory and the closely related representation theory have many important applications in physics, chemistry, and materials science. Group theory is also central to public key cryptography. The first class of groups to undergo a systematic study was permutation groups. Given any set X and a collection G of bijections of X into itself that is closed under compositions and inverses, G is a group acting on X. If X consists of n elements and G consists of all permutations, G is the symmetric group Sn; in general, any permutation group G is a subgroup of the symmetric group of X. An early construction due to Cayley exhibited any group as a permutation group, acting on itself by means of the left regular representation. In many cases, the structure of a permutation group can be studied using the properties of its action on the corresponding set. For example, in this way one proves that for n ≥ 5 the alternating group An is simple; this fact plays a key role in the impossibility of solving a general algebraic equation of degree n ≥ 5 in radicals. The next important class of groups is given by matrix groups: here G is a set consisting of invertible matrices of given order n over a field K that is closed under products and inverses. Such a group acts on the vector space Kn by linear transformations. In the case of permutation groups, X is a set; for matrix groups, X is a vector space. The concept of a transformation group is closely related with the concept of a symmetry group. The theory of transformation groups forms a bridge connecting group theory with differential geometry; a long line of research, originating with Lie and Klein, considers group actions on manifolds. The groups themselves may be discrete or continuous.
Most groups considered in the first stage of the development of group theory were concrete, having been realized through numbers, permutations, or matrices. It was not until the late nineteenth century that the idea of an abstract group as a set with operations satisfying a certain system of axioms began to take hold. A typical way of specifying an abstract group is through a presentation by generators and relations. A significant source of abstract groups is given by the construction of a factor group, or quotient group, G/H, of a group G by a normal subgroup H. Class groups of algebraic number fields were among the earliest examples of factor groups, of much interest in number theory.
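The symmetric group Sn described above can be examined directly in Python by representing permutations of {0, 1, 2} as tuples; a small illustrative sketch, with the helper names mul and inverse ours:

```python
from itertools import permutations

# The symmetric group S3: all bijections of {0, 1, 2}, each represented
# as a tuple p where p[i] is the image of i.
S3 = list(permutations(range(3)))

def mul(p, q):
    """Composition p ∘ q: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

identity = (0, 1, 2)

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

# Group axioms, checked exhaustively: closure, identity, inverses.
assert all(mul(p, q) in S3 for p in S3 for q in S3)
assert all(mul(p, identity) == p == mul(identity, p) for p in S3)
assert all(mul(p, inverse(p)) == identity for p in S3)

# S3 is non-abelian: the order of composition matters.
a, b = (1, 0, 2), (0, 2, 1)
print(mul(a, b), mul(b, a))  # (1, 2, 0) and (2, 0, 1) differ
```

Since S3 has only six elements, the exhaustive checks above really do verify the group axioms for this one finite group.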
Group theory
–
Water molecule with symmetry axis
Group theory
–
The popular puzzle Rubik's cube invented in 1974 by Ernő Rubik has been used as an illustration of permutation groups.
76.
Theory of computation
–
In theoretical computer science and mathematics, the theory of computation is the branch that deals with how efficiently problems can be solved on a model of computation, using an algorithm. In order to perform a rigorous study of computation, computer scientists work with a mathematical abstraction of computers called a model of computation. There are several models in use, but the most commonly examined is the Turing machine. It might seem that the potentially infinite memory capacity is an unrealizable attribute, but any decidable problem solved by a Turing machine will always require only a finite amount of memory. So, in principle, any problem that can be solved by a Turing machine can be solved by a computer that has a finite amount of memory. The theory of computation can be considered the creation of models of all kinds in the field of computer science; therefore, mathematics and logic are used. In the last century it became an independent academic discipline and was separated from mathematics. Some pioneers of the theory of computation were Alonzo Church, Kurt Gödel, Alan Turing, Stephen Kleene, John von Neumann and Claude Shannon. Automata theory is the study of abstract machines and the problems that can be solved using these machines. These abstract machines are called automata; the word comes from a Greek word meaning self-acting. Automata theory is closely related to formal language theory, as automata are often classified by the class of formal languages they are able to recognize. An automaton can be a finite representation of a formal language that may be an infinite set. Automata are used as models for computing machines, and are used for proofs about computability. Formal language theory is a branch of mathematics concerned with describing languages as a set of operations over an alphabet; it is closely linked with automata theory, as automata are used to generate and recognize formal languages.
Because automata are used as models for computation, formal languages are the preferred mode of specification for any problem that must be computed. Computability theory deals primarily with the question of the extent to which a problem is solvable on a computer; much of computability theory builds on the halting problem result. Many mathematicians and computational theorists who study recursion theory will refer to it as computability theory. Complexity theory considers not only whether a problem can be solved at all on a computer, but also how efficiently the problem can be solved. In order to analyze how much time and space a given algorithm requires, computer scientists express the time or space required as a function of the size of the input. For example, finding a particular number in a long list of numbers becomes harder as the list of numbers grows larger. If we say there are n numbers in the list, then if the list is not sorted or indexed in any way we may have to look at every number in order to find the number we are seeking. We thus say that in order to solve this problem, the computer needs to perform a number of steps that grows linearly in the size of the problem.
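The linear-search example above can be made concrete in Python by counting comparisons (an illustration of linear growth, not a claim about any particular machine model):

```python
def linear_search(items, target):
    """Scan the list left to right, counting comparisons until target is found."""
    steps = 0
    for value in items:
        steps += 1
        if value == target:
            return steps
    return steps  # target absent: every element was examined

# Worst case: the target is the last element (or absent entirely),
# so the number of steps equals n and grows linearly with the list size.
for n in (10, 100, 1000):
    data = list(range(n))
    print(n, linear_search(data, n - 1))  # n comparisons in the worst case
```

If the list were sorted, binary search would need only about log2(n) comparisons, which is exactly the kind of efficiency distinction complexity theory studies.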
Theory of computation
–
An artistic representation of a Turing machine. Turing machines are frequently used as theoretical models for computing.
77.
Differential equation
–
A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities, and the derivatives represent their rates of change. Because such relations are extremely common, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology. In pure mathematics, differential equations are studied from several different perspectives. Only the simplest differential equations are solvable by explicit formulas; however, if a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. Differential equations first came into existence with the invention of calculus by Newton and Leibniz. Jacob Bernoulli proposed the Bernoulli differential equation in 1695; this is an ordinary differential equation of the form y′ + P(x)y = Q(x)yⁿ, for which the following year Leibniz obtained solutions by simplifying it. Historically, the problem of a vibrating string such as that of a musical instrument was studied by Jean le Rond d'Alembert, Leonhard Euler, Daniel Bernoulli, and Joseph-Louis Lagrange. In 1746, d'Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation. The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point. Lagrange solved this problem in 1755 and sent the solution to Euler; both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. In 1822 Fourier published his work on heat flow in Théorie analytique de la chaleur; contained in this book was Fourier's proposal of his heat equation for conductive diffusion of heat, and this partial differential equation is now taught to every student of mathematical physics. For example, in classical mechanics, the motion of a body is described by its position and velocity as the time value varies.
Newton's laws allow one to express these variables dynamically as a differential equation for the unknown position of the body as a function of time. In some cases, this differential equation may be solved explicitly. An example of modelling a real-world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance. The ball's acceleration towards the ground is the acceleration due to gravity minus the deceleration due to air resistance. Gravity is considered constant, and air resistance may be modelled as proportional to the ball's velocity. This means that the ball's acceleration, which is a derivative of its velocity, depends on the velocity, and the velocity in turn depends on time. Finding the velocity as a function of time involves solving a differential equation. Differential equations can be divided into several types.
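The falling-ball model above, dv/dt = g - kv, can be solved numerically and compared against its closed-form solution v(t) = (g/k)(1 - e^(-kt)); a minimal Python sketch, with the drag coefficient k chosen arbitrarily for illustration:

```python
import math

g, k = 9.81, 0.5   # gravity (m/s^2) and an assumed drag coefficient (1/s)

def euler_velocity(t_end, dt=1e-4):
    """Integrate dv/dt = g - k*v (falling ball with linear drag) by Euler's
    method, starting from rest (v = 0 at t = 0)."""
    v, t = 0.0, 0.0
    while t < t_end:
        v += (g - k * v) * dt   # one Euler step: v_new = v + v'(t) * dt
        t += dt
    return v

exact = (g / k) * (1 - math.exp(-k * 2.0))  # closed-form solution at t = 2 s
print(euler_velocity(2.0), exact)           # the two values agree closely
```

As t grows, both solutions approach the terminal velocity g/k, where drag exactly balances gravity.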
Differential equation
–
Navier–Stokes differential equations used to simulate airflow around an obstruction.
78.
Game theory
–
Game theory is the study of mathematical models of conflict and cooperation between intelligent rational decision-makers. Game theory is used in economics, political science, and psychology, as well as in logic and computer science. Originally, it addressed zero-sum games, in which one person's gains result in losses for the other participants. Today, game theory applies to a wide range of behavioral relations, and is now an umbrella term for the science of logical decision making in humans, animals, and computers. Modern game theory began with the idea regarding the existence of mixed-strategy equilibria in two-person zero-sum games. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets. His paper was followed by the 1944 book Theory of Games and Economic Behavior, co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility. This theory was developed extensively in the 1950s by many scholars. Game theory was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been recognized as an important tool in many fields, with the Nobel Memorial Prize in Economic Sciences going to game theorist Jean Tirole in 2014; John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology. Early discussions of examples of two-person games occurred long before the rise of modern mathematical game theory. The first known discussion of game theory occurred in a letter written by Charles Waldegrave, an active Jacobite and uncle to James Waldegrave, a British diplomat, in 1713. In this letter, Waldegrave provides a mixed-strategy solution to a two-person version of the card game le Her. James Madison made what we now recognize as a game-theoretic analysis of the ways states can be expected to behave under different systems of taxation.
In 1913 Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels, which proved that the optimal chess strategy is strictly determined. This paved the way for more general theorems; the Danish mathematician Zeuthen proved that the mathematical model of chess had a winning strategy by using Brouwer's fixed point theorem. In his 1938 book Applications aux Jeux de Hasard and earlier notes, Borel conjectured the non-existence of mixed-strategy equilibria in two-person zero-sum games, a conjecture that was proved false. Game theory did not really exist as a unique field until John von Neumann published a paper in 1928. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, and his paper was followed by his 1944 book Theory of Games and Economic Behavior, co-authored with Oskar Morgenstern.
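The "strictly determined" property mentioned above can be sketched numerically: for a zero-sum game given by a payoff matrix, the row player's maximin value equals the column player's minimax value exactly when the game has a saddle point in pure strategies. A minimal sketch, with an invented payoff matrix chosen for illustration:

```python
# Sketch of a strictly determined zero-sum game: the row player's
# maximin equals the column player's minimax at a saddle point.

def maximin(payoff):
    """Best guaranteed payoff for the row player using pure strategies."""
    return max(min(row) for row in payoff)

def minimax(payoff):
    """Smallest payoff the column player can hold the row player to."""
    return min(max(col) for col in zip(*payoff))

game = [
    [3, 1, 4],
    [1, 0, 2],
    [5, 2, 6],
]
# Both values are 2: the entry at row 3, column 2 is a saddle point,
# so this game needs no mixed strategies.
```

When the two values differ (as in matching pennies), no saddle point exists and optimal play requires the mixed strategies whose existence von Neumann proved.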
Game theory
–
An extensive form game
79.
Discrete geometry
–
Discrete geometry and combinatorial geometry are branches of geometry that study combinatorial properties and constructive methods of discrete geometric objects. Most questions in discrete geometry involve finite or discrete sets of basic geometric objects, such as points, lines, planes, circles, spheres, and polygons. The subject focuses on the combinatorial properties of these objects, such as how they intersect one another. Although polyhedra and tessellations had been studied for many years by people such as Kepler and Cauchy, modern discrete geometry has its origins in the late 19th century; László Fejes Tóth, H. S. M. Coxeter and Paul Erdős laid the foundations of discrete geometry. A polytope is a geometric object with flat sides, which exists in any general number of dimensions. A polygon is a polytope in two dimensions, a polyhedron in three dimensions, and so on in higher dimensions; some theories further generalize the idea to include such objects as unbounded polytopes and abstract polytopes. A sphere packing is an arrangement of non-overlapping spheres within a containing space; the spheres considered are usually all of identical size, and the space is usually three-dimensional Euclidean space. However, sphere packing problems can be generalised to consider unequal spheres. A tessellation of a flat surface is the tiling of a plane using one or more geometric shapes, called tiles, with no overlaps and no gaps. In mathematics, tessellations can be generalized to higher dimensions; topics in this area include Cauchy's theorem and flexible polyhedra. Incidence structures generalize planes, as can be seen from their axiomatic definitions. Incidence structures also generalize the higher-dimensional analogs, and the finite structures are sometimes called finite geometries. Formally, an incidence structure is a triple C = (P, L, I), where P is a set of points, L is a set of lines, and I ⊆ P × L is the incidence relation. The elements of I are called flags. If (p, l) ∈ I, we say that point p lies on line l. A geometric graph is a graph in which the vertices or edges are associated with geometric objects. 
Examples include Euclidean graphs, the 1-skeleton of a polyhedron or polytope, and intersection graphs. Simplicial complexes should not be confused with the more abstract notion of a simplicial set appearing in modern simplicial homotopy theory; the purely combinatorial counterpart to a simplicial complex is an abstract simplicial complex. The discipline of combinatorial topology used combinatorial concepts in topology, and in the early 20th century this turned into the field of algebraic topology. Lovász's proof used the Borsuk–Ulam theorem, and this theorem retains a prominent role in this new field. This theorem has many equivalent versions and analogs and has been used in the study of fair division problems. Topics in this area include Sperner's lemma and regular maps. A discrete group is a group G equipped with the discrete topology.
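Geometric graphs of the kind described above can be made concrete with a small sketch. In a unit disk graph (the example pictured in the figure for this section), vertices are points in the plane and an edge joins two points whenever their distance is at most a fixed radius; the point set here is invented for illustration:

```python
import math

def unit_disk_graph(points, radius=1.0):
    """Edges join pairs of points at distance <= radius,
    i.e. pairs whose disks of radius/2 overlap."""
    edges = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (x1, y1), (x2, y2) = points[i], points[j]
            if math.hypot(x1 - x2, y1 - y2) <= radius:
                edges.append((i, j))
    return edges

pts = [(0, 0), (0.5, 0), (3, 0), (3.2, 0.1)]
edges = unit_disk_graph(pts)  # [(0, 1), (2, 3)]
```

The quadratic pairwise loop is fine for a sketch; practical implementations use spatial indexing to avoid comparing every pair.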
Discrete geometry
–
A collection of circles and the corresponding unit disk graph
80.
Algebraic geometry
–
Algebraic geometry is a branch of mathematics, classically studying zeros of multivariate polynomials. Modern algebraic geometry is based on the use of abstract algebraic techniques, mainly from commutative algebra, for solving geometrical problems about these sets of zeros. The fundamental objects of study in algebraic geometry are algebraic varieties, which are geometric manifestations of solutions of systems of polynomial equations. A point of the plane belongs to an algebraic curve if its coordinates satisfy a given polynomial equation. Basic questions involve the study of the points of special interest like the singular points, the inflection points and the points at infinity. More advanced questions involve the topology of the curve and relations between the curves given by different equations. Algebraic geometry occupies a central place in modern mathematics and has multiple conceptual connections with such diverse fields as complex analysis, topology and number theory. In the 20th century, algebraic geometry split into several subareas. The mainstream of algebraic geometry is devoted to the study of the complex points of the algebraic varieties and more generally to the points with coordinates in an algebraically closed field. The study of the points of an algebraic variety with coordinates in the field of the rational numbers or in a number field became arithmetic geometry. The study of the real points of an algebraic variety is the subject of real algebraic geometry. A large part of singularity theory is devoted to the singularities of algebraic varieties. With the rise of computers, a computational algebraic geometry area has emerged, which lies at the intersection of algebraic geometry and computer algebra. It consists essentially in developing algorithms and software for studying and finding the properties of explicitly given algebraic varieties. In Grothendieck's scheme theory, a point of such a scheme may be either a usual point or a subvariety. This approach also enables a unification of the language and the tools of classical algebraic geometry, mainly concerned with complex points. 
Wiles's proof of the longstanding conjecture known as Fermat's Last Theorem is an example of the power of this approach. For instance, the unit sphere in three-dimensional Euclidean space R³ can be defined as the set of all points (x, y, z) with x² + y² + z² − 1 = 0. A slanted circle in R³ can be defined as the set of all points (x, y, z) which satisfy the two polynomial equations x² + y² + z² − 1 = 0 and x + y + z = 0. First, we start with a field k. In classical algebraic geometry, this field was always the complex numbers C, and we consider the affine space of dimension n over k, denoted Aⁿ. When one fixes a coordinate system, one may identify Aⁿ with kⁿ. The purpose of not working with kⁿ is to emphasize that one forgets the vector space structure that kⁿ carries; the property of a function to be polynomial does not depend on the choice of a coordinate system in Aⁿ. When a coordinate system is chosen, the regular functions on the affine n-space may be identified with the ring of polynomial functions in n variables over k.
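The slanted-circle example above can be checked numerically: a point lies on the variety exactly when it satisfies both defining polynomial equations. A minimal sketch (the membership test and sample points are illustrative, not a general variety algorithm):

```python
import math

def on_slanted_circle(p, tol=1e-9):
    """True iff p = (x, y, z) satisfies both defining equations
    x^2 + y^2 + z^2 - 1 = 0 and x + y + z = 0, up to rounding."""
    x, y, z = p
    return (abs(x * x + y * y + z * z - 1) < tol
            and abs(x + y + z) < tol)

s = 1 / math.sqrt(2)
a = on_slanted_circle((s, -s, 0))   # True: on the sphere and the plane
b = on_slanted_circle((1, 0, 0))    # False: on the sphere only
```

The point (1, 0, 0) illustrates that satisfying one equation of the system is not enough; the variety is the common zero set of both.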
Algebraic geometry
–
This Togliatti surface is an algebraic surface of degree five. The picture represents a portion of its real locus.
81.
Analytic geometry
–
In classical mathematics, analytic geometry, also known as coordinate geometry or Cartesian geometry, is the study of geometry using a coordinate system. Analytic geometry is used in physics and engineering, and also in aviation, rocketry, space science, and spaceflight. It is the foundation of most modern fields of geometry, including algebraic, differential, and discrete geometry. Usually the Cartesian coordinate system is applied to manipulate equations for planes, straight lines, and squares, often in two and sometimes in three dimensions. Geometrically, one studies the Euclidean plane and Euclidean space. The numerical output, however, might also be a vector or a shape. That the algebra of the real numbers can be employed to yield results about the linear continuum of geometry relies on the Cantor–Dedekind axiom. Apollonius in the Conics further developed a method that is so similar to analytic geometry that his work is thought to have anticipated the work of Descartes by some 1800 years. He further developed relations between the abscissas and the corresponding ordinates that are equivalent to rhetorical equations of curves. That is, equations were determined by curves, but curves were not determined by equations. Coordinates, variables, and equations were subsidiary notions applied to a specific geometric situation. Analytic geometry was independently invented by René Descartes and Pierre de Fermat, although Descartes is sometimes given sole credit. Cartesian geometry, the alternative term used for analytic geometry, is named after Descartes. This work, written in his native French tongue, and its philosophical principles, provided a foundation for calculus in Europe. Initially the work was not well received, due, in part, to the many gaps in arguments and complicated equations. Only after the translation into Latin and the addition of commentary by van Schooten in 1649 did Descartes's masterpiece receive due recognition. Pierre de Fermat also pioneered the development of analytic geometry. 
Although not published in his lifetime, a manuscript form of Ad locos planos et solidos isagoge was circulating in Paris in 1637. Clearly written and well received, the Introduction also laid the groundwork for analytical geometry. As a consequence of his approach, Descartes had to deal with more complicated equations, and he had to develop the methods to work with polynomial equations of higher degree. It was Leonhard Euler who first applied the coordinate method in a systematic study of space curves and surfaces. In analytic geometry, the plane is given a coordinate system, by which every point has a pair of real number coordinates. Similarly, Euclidean space is given coordinates where every point has three coordinates. The value of the coordinates depends on the choice of the initial point of origin. Cartesian coordinates are typically written as an ordered pair (x, y); this system can also be used for three-dimensional geometry, where every point in Euclidean space is represented by an ordered triple of coordinates (x, y, z). In polar coordinates, every point of the plane is represented by its distance r from the origin and its angle θ from the polar axis.
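The two coordinate systems just described are related by elementary trigonometry: x = r cos θ and y = r sin θ, and conversely r = √(x² + y²) with θ recovered from the signs of x and y. A short sketch of both conversions:

```python
import math

def polar_to_cartesian(r, theta):
    """Convert (r, θ) to the equivalent Cartesian pair (x, y)."""
    return r * math.cos(theta), r * math.sin(theta)

def cartesian_to_polar(x, y):
    """Distance from the origin and angle from the polar axis."""
    return math.hypot(x, y), math.atan2(y, x)

# The point at distance 2, angle π/2 is (0, 2) in Cartesian form.
x, y = polar_to_cartesian(2, math.pi / 2)
r, theta = cartesian_to_polar(x, y)
```

Note the use of atan2 rather than atan(y/x), which would lose the quadrant information that the pair of signs carries.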
Analytic geometry
–
Cartesian coordinates
82.
Differential geometry
–
Differential geometry is a mathematical discipline that uses the techniques of differential calculus, integral calculus, linear algebra and multilinear algebra to study problems in geometry. The theory of plane and space curves and surfaces in three-dimensional Euclidean space formed the basis for the development of differential geometry during the 18th and 19th centuries. Since the late 19th century, differential geometry has grown into a field concerned more generally with geometric structures on differentiable manifolds. Differential geometry is closely related to differential topology and the geometric aspects of the theory of differential equations. The differential geometry of surfaces captures many of the key ideas and techniques characteristic of the field. Differential geometry arose and developed as a result of and in connection to the mathematical analysis of curves and surfaces. These unanswered questions indicated greater, hidden relationships. Initially applied to Euclidean space, further explorations led to non-Euclidean space, and metric and topological spaces. Riemannian geometry studies Riemannian manifolds, smooth manifolds with a Riemannian metric. This is a concept of distance expressed by means of a smooth positive definite symmetric bilinear form defined on the tangent space at each point. Various concepts based on length, such as the arc length of curves, area of plane regions, and volume of solids, possess natural analogues in Riemannian geometry. The notion of a directional derivative of a function from multivariable calculus is extended in Riemannian geometry to the notion of a covariant derivative of a tensor. Many concepts and techniques of analysis and differential equations have been generalized to the setting of Riemannian manifolds. A distance-preserving diffeomorphism between Riemannian manifolds is called an isometry. This notion can also be defined locally, i.e. for small neighborhoods of points. Any two regular curves are locally isometric. 
In higher dimensions, the Riemann curvature tensor is an important pointwise invariant associated with a Riemannian manifold that measures how close it is to being flat. An important class of Riemannian manifolds is the Riemannian symmetric spaces, whose curvature is not necessarily constant. These are the closest analogues to the ordinary plane and space considered in Euclidean and non-Euclidean geometry. Pseudo-Riemannian geometry generalizes Riemannian geometry to the case in which the metric tensor need not be positive-definite. A special case of this is a Lorentzian manifold, which is the mathematical basis of Einstein's general relativity theory of gravity. Finsler geometry has the Finsler manifold as the main object of study. This is a differential manifold with a Finsler metric, i.e. a Banach norm defined on each tangent space. Riemannian manifolds are special cases of the more general Finsler manifolds. A Finsler structure on a manifold M is a function F : TM → [0, ∞) such that F(x, my) = |m| F(x, y) for all (x, y) in TM and all m ∈ R, and F is infinitely differentiable on TM ∖ {0}. Symplectic geometry is the study of symplectic manifolds. A symplectic manifold is an almost symplectic manifold for which the symplectic form ω is closed: dω = 0. A diffeomorphism between two symplectic manifolds which preserves the symplectic form is called a symplectomorphism. Non-degenerate skew-symmetric bilinear forms can only exist on even-dimensional vector spaces. In dimension 2, a symplectic manifold is just a surface endowed with an area form, and a symplectomorphism is an area-preserving diffeomorphism.
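The arc length of curves mentioned above is one of the simplest length-based concepts, and it can be approximated numerically by summing chord lengths over a fine partition of the parameter interval. A minimal sketch, using the unit circle (whose length is known to be 2π) as a check:

```python
import math

def arc_length(curve, t0, t1, n=10000):
    """Approximate the length of a parametrized plane curve
    by summing the lengths of n inscribed chords."""
    total = 0.0
    prev = curve(t0)
    for k in range(1, n + 1):
        t = t0 + (t1 - t0) * k / n
        pt = curve(t)
        total += math.hypot(pt[0] - prev[0], pt[1] - prev[1])
        prev = pt
    return total

# The unit circle, parametrized by angle: length approaches 2π.
circle = lambda t: (math.cos(t), math.sin(t))
length = arc_length(circle, 0, 2 * math.pi)
```

Inscribed chords always slightly underestimate the true length; refining the partition drives the error to zero, which is exactly how arc length is defined as a limit.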
Differential geometry
–
A triangle immersed in a saddle-shape plane (a hyperbolic paraboloid), as well as two diverging ultraparallel lines.
83.
Graph theory
–
In mathematics, graph theory is the study of graphs, which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices, nodes, or points which are connected by edges, arcs, or lines. Graphs are one of the prime objects of study in discrete mathematics. Refer to the glossary of graph theory for basic definitions in graph theory. The following are some of the more basic ways of defining graphs and related mathematical structures. To avoid ambiguity, this type of graph may be described precisely as undirected and simple. Other senses of graph stem from different conceptions of the edge set. In one more generalized notion, V is a set together with a relation of incidence that associates with each edge two vertices. In another generalized notion, E is a multiset of unordered pairs of vertices; many authors call this type of object a multigraph or pseudograph. All of these variants and others are described more fully below. The vertices belonging to an edge are called the ends or end vertices of the edge. A vertex may exist in a graph and not belong to an edge. V and E are usually taken to be finite, and many of the well-known results are not true for infinite graphs because many of the arguments fail in the infinite case. The order of a graph is |V|, its number of vertices; the size of a graph is |E|, its number of edges. The degree or valency of a vertex is the number of edges that connect to it. For an edge {x, y}, graph theorists usually use the somewhat shorter notation xy. Graphs can be used to model many types of relations and processes in physical, biological, social and information systems; many practical problems can be represented by graphs. Emphasizing their application to real-world systems, the term network is sometimes defined to mean a graph in which attributes are associated with the nodes and/or edges. In computer science, graphs are used to represent networks of communication, data organization, computational devices, the flow of computation, etc. 
For instance, the link structure of a website can be represented by a directed graph, in which the vertices represent web pages and directed edges represent links from one page to another. A similar approach can be taken to problems in social media, travel, biology, computer chip design, and many other fields. The development of algorithms to handle graphs is therefore of major interest in computer science. The transformation of graphs is often formalized and represented by graph rewrite systems. Graph-theoretic methods, in various forms, have proven particularly useful in linguistics. Traditionally, syntax and compositional semantics follow tree-based structures, whose expressive power lies in the principle of compositionality.
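The basic definitions above (vertex set V, edge set E, order |V|, size |E|, and vertex degree) can be sketched directly with an adjacency-list representation; the four-vertex example graph is invented for illustration:

```python
# A simple undirected graph stored as an adjacency list (a dict
# mapping each vertex to the set of its neighbours).

def make_graph(vertices, edges):
    adj = {v: set() for v in vertices}
    for x, y in edges:
        adj[x].add(y)  # undirected: record the edge xy
        adj[y].add(x)  # in both directions
    return adj

G = make_graph("abcd", [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")])

order = len(G)                                 # |V| = 4
size = sum(len(nbrs) for nbrs in G.values()) // 2  # |E| = 4
degree_c = len(G["c"])                         # vertex c has degree 3
```

Summing all degrees counts every edge twice, hence the division by two; this is the handshaking lemma in miniature.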
Graph theory
–
A drawing of a graph.
84.
Information theory
–
Information theory studies the quantification, storage, and communication of information. A key measure in information theory is entropy. Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip provides less information (lower entropy) than specifying the outcome from a roll of a die. Some other important measures in information theory are mutual information, channel capacity, and error exponents. Applications of fundamental topics of information theory include lossless data compression, lossy data compression, and channel coding. The field is at the intersection of mathematics, statistics, computer science, physics, and neurobiology. Information theory studies the transmission, processing, utilization, and extraction of information. Abstractly, information can be thought of as the resolution of uncertainty. Information theory is a broad and deep mathematical theory, with equally broad and deep applications, amongst which is the vital field of coding theory. These codes can be subdivided into data compression and error-correction techniques. In the latter case, it took many years to find the methods Shannon's work proved were possible. A third class of information theory codes are cryptographic algorithms; concepts, methods and results from coding theory and information theory are widely used in cryptography and cryptanalysis. See the article ban for a historical application. Information theory is also used in information retrieval, intelligence gathering, gambling, statistics, and even in musical composition. Prior to Shannon's founding 1948 paper, limited information-theoretic ideas had been developed at Bell Labs. The unit of information was therefore the decimal digit, much later renamed the hartley in Ralph Hartley's honour as a unit or scale or measure of information. Alan Turing in 1940 used similar ideas as part of the statistical analysis of the breaking of the German Second World War Enigma ciphers. 
Much of the mathematics behind information theory with events of different probabilities was developed for the field of thermodynamics by Ludwig Boltzmann. Information theory is based on probability theory and statistics. Information theory often concerns itself with measures of information of the distributions associated with random variables. Important quantities of information are entropy, a measure of information in a single random variable, and mutual information, a measure of information in common between two random variables. The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. A common unit of information is the bit, based on the binary logarithm. Other units include the nat, which is based on the natural logarithm, and the hartley, which is based on the common logarithm. In what follows, an expression of the form p log p is considered by convention to be equal to zero whenever p = 0. This is justified because lim_{p→0+} p log p = 0 for any logarithmic base.
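The entropy measure and the p log p convention above can be sketched directly. With the binary logarithm the unit is the bit, so a fair coin carries exactly 1 bit and a fair six-sided die carries log₂ 6 ≈ 2.585 bits, matching the coin-versus-die comparison in the text:

```python
import math

def entropy(probs, base=2):
    """Shannon entropy of a discrete distribution, in units set by
    the log base; terms with p = 0 are dropped, per the convention
    that p log p = 0 when p = 0."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

coin = entropy([0.5, 0.5])     # 1 bit: two equally likely outcomes
die = entropy([1 / 6] * 6)     # ≈ 2.585 bits: six equally likely outcomes
```

This is also the binary entropy function of the figure below when applied to [p, 1 − p]: it peaks at 1 bit when p = 1/2 and falls to 0 as the outcome becomes certain.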
Information theory
–
A picture showing scratches on the readable surface of a CD-R. Music and data CDs are coded using error correcting codes and thus can still be read even if they have minor scratches using error detection and correction.
Information theory
–
Entropy of a Bernoulli trial as a function of success probability, often called the binary entropy function,. The entropy is maximized at 1 bit per trial when the two possible outcomes are equally probable, as in an unbiased coin toss.
85.
Mathematical physics
–
Mathematical physics refers to the development of mathematical methods for application to problems in physics. It is a branch of applied mathematics, but deals with physical problems. There are several distinct branches of mathematical physics, and these roughly correspond to particular historical periods. The first is the rigorous, abstract and advanced reformulation of Newtonian mechanics in terms of Lagrangian mechanics and Hamiltonian mechanics; both formulations are embodied in analytical mechanics. These approaches and ideas can be, and in fact have been, extended to other areas of physics such as statistical mechanics, continuum mechanics, and classical field theory. Moreover, they have provided several examples and basic ideas in differential geometry. The theory of partial differential equations is perhaps most closely associated with mathematical physics. These were developed intensively from the second half of the eighteenth century until the 1930s. Physical applications of these developments include hydrodynamics, celestial mechanics, continuum mechanics, elasticity theory, acoustics, thermodynamics, electricity, magnetism, and aerodynamics. The theory of atomic spectra (and, later, quantum mechanics) developed almost concurrently with the mathematical fields of linear algebra and the spectral theory of operators. Nonrelativistic quantum mechanics includes Schrödinger operators, and it has connections to atomic and molecular physics. Quantum information theory is another subspecialty. The special and general theories of relativity require a rather different type of mathematics. This was group theory, which played an important role in both quantum field theory and differential geometry. This was, however, gradually supplemented by topology and functional analysis in the mathematical description of cosmological as well as quantum field theory phenomena. In this area both homological algebra and category theory are important nowadays. Statistical mechanics forms a separate field, which includes the theory of phase transitions. It relies upon Hamiltonian mechanics and it is related to the more mathematical ergodic theory. 
There are increasing interactions between combinatorics and physics, in particular statistical physics. The usage of the term mathematical physics is sometimes idiosyncratic. Certain parts of mathematics that arose from the development of physics are not, in fact, considered parts of mathematical physics. The term mathematical physics is sometimes used to denote research aimed at studying and solving problems inspired by physics or thought experiments within a mathematically rigorous framework.
Mathematical physics
–
An example of mathematical physics: solutions of Schrödinger's equation for quantum harmonic oscillators (left) with their amplitudes (right).
86.
Mathematical statistics
–
Mathematical statistics is the application of probability theory, a branch of mathematics, to statistics. Mathematical techniques used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure-theoretic probability theory. Statistical science is concerned with the planning of studies, especially with the design of randomized experiments. The initial analysis of the data from properly randomized studies often follows the study protocol. Of course, the data from a randomized study can be analyzed to consider secondary hypotheses or to suggest new ideas. A secondary analysis of the data from a planned study uses tools from data analysis. Data analysis is divided into: descriptive statistics, the part of statistics that describes data, i.e. summarises the data and their typical properties; and inferential statistics, the part of statistics that draws conclusions from data using some model for the data. Mathematical statistics has been inspired by and has extended many options in applied statistics. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures. A probability distribution can either be univariate or multivariate. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution. The multivariate normal distribution is a commonly encountered multivariate distribution. Inferential statistics are used to test hypotheses and make estimations using sample data. Whereas descriptive statistics describe a sample, inferential statistics infer predictions about a larger population that the sample represents. The outcome of statistical inference may be an answer to the question "what should be done next?", where this might be a decision about making further experiments or surveys, or about drawing a conclusion before implementing some organizational or governmental policy. 
For the most part, statistical inference makes propositions about populations, using data drawn from the population of interest via some form of random sampling. More generally, data about a random process is obtained from its observed behavior during a finite period of time. In statistics, regression analysis is a process for estimating the relationships among variables. It includes many techniques for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables. Less commonly, the focus is on a quantile, or other location parameter of the conditional distribution of the dependent variable given the independent variables. In all cases, the estimation target is a function of the independent variables called the regression function. In regression analysis, it is also of interest to characterize the variation of the dependent variable around the regression function, which can be described by a probability distribution. Many techniques for carrying out regression analysis have been developed. Nonparametric regression refers to techniques that allow the regression function to lie in a specified set of functions, which may be infinite-dimensional. Nonparametric statistics are not based on parameterized families of probability distributions. They include both descriptive and inferential statistics. The typical parameters are the mean, variance, etc.
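The simplest regression function is a straight line fitted by ordinary least squares, as in the figure below. A minimal sketch, with an invented exact-fit data set so the recovered intercept and slope are easy to check:

```python
# Ordinary least-squares fit of the line y = a + b*x, the simplest
# instance of estimating a regression function from data.

def fit_line(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx          # slope
    a = my - b * mx        # intercept
    return a, b

# Data generated from y = 1 + 2x, so the fit recovers a = 1, b = 2.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

With noisy data the same formulas give the line minimizing the sum of squared vertical deviations; the scatter of the data around that line is the variation the text says a probability distribution describes.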
Mathematical statistics
–
Illustration of linear regression on a data set. Regression analysis is an important part of mathematical statistics.
87.
Numerical analysis
–
Numerical analysis is the study of algorithms that use numerical approximation for the problems of mathematical analysis. One of the earliest mathematical writings is a Babylonian tablet that gives a sexagesimal numerical approximation of the square root of 2, the length of the diagonal in a unit square. Being able to compute the sides of a triangle (and hence, being able to compute square roots) is important, for instance, in astronomy, carpentry, and construction. Numerical analysis continues this long tradition of practical mathematical calculations. Much like the Babylonian approximation of the square root of 2, modern numerical analysis does not seek exact answers. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors. Before the advent of modern computers, numerical methods often depended on hand interpolation in large printed tables. Since the mid 20th century, computers calculate the required functions instead; these same interpolation formulas nevertheless continue to be used as part of the software algorithms for solving differential equations. Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of ordinary differential equations. Car companies can improve the crash safety of their vehicles by using computer simulations of car crashes. Such simulations essentially consist of solving partial differential equations numerically. Hedge funds use tools from all fields of numerical analysis to attempt to calculate the value of stocks. Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and crew assignments, and fuel needs. Historically, such algorithms were developed within the overlapping field of operations research. Insurance companies use numerical programs for actuarial analysis. The rest of this section outlines several important themes of numerical analysis. The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. 
The function values are no longer very useful when a computer is available, but the large listing of formulas can still be very handy. The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done.
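The Babylonian approximation of √2 mentioned above was almost certainly obtained by an iterative averaging scheme, today usually called the Babylonian method (a special case of Newton's method). A minimal sketch of the iteration:

```python
# The Babylonian method for square roots: repeatedly average the
# current guess x with a/x. If x overestimates sqrt(a), then a/x
# underestimates it, so the average is a better approximation.

def babylonian_sqrt(a, x0=1.0, steps=6):
    x = x0
    for _ in range(steps):
        x = (x + a / x) / 2
    return x

root2 = babylonian_sqrt(2)  # 1.41421356..., accurate to machine precision
```

Convergence is quadratic: the number of correct digits roughly doubles each step, so six iterations from a crude starting guess already exceed the four sexagesimal figures of the tablet.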
Numerical analysis
–
Babylonian clay tablet YBC 7289 (c. 1800–1600 BC) with annotations. The approximation of the square root of 2 is four sexagesimal figures, which is about six decimal figures: 1 + 24/60 + 51/60² + 10/60³ = 1.41421296...
Numerical analysis
–
Direct method
Numerical analysis
88.
Mathematical optimization
–
In mathematics, computer science and operations research, mathematical optimization (also spelled mathematical optimisation) is the selection of a best element, with regard to some criterion, from some set of available alternatives. The generalization of optimization theory and techniques to other formulations comprises a large area of applied mathematics. Such a formulation is called an optimization problem or a mathematical programming problem. Many real-world and theoretical problems may be modeled in this general framework. Typically, A is some subset of the Euclidean space Rⁿ, often specified by a set of constraints, equalities or inequalities that the members of A have to satisfy. The domain A of f is called the search space or the choice set. The function f is called, variously, an objective function, a loss function or cost function, a utility function or fitness function, or, in certain fields, an energy function. A feasible solution that minimizes the objective function is called an optimal solution. In mathematics, conventional optimization problems are usually stated in terms of minimization. Generally, unless both the objective function and the feasible region are convex in a minimization problem, there may be several local minima. While a local minimum is at least as good as any nearby points, a global minimum is at least as good as every feasible point. In a convex problem, if there is a local minimum that is interior, it is also the global minimum. Optimization problems are often expressed with special notation. Consider the notation min_{x ∈ R} (x² + 1), which denotes the minimum value of the objective function x² + 1 when choosing x from the set of real numbers R. The minimum value in this case is 1, occurring at x = 0. Similarly, the notation max_{x ∈ R} 2x asks for the maximum value of the objective function 2x, where x may be any real number. In this case, there is no such maximum as the objective function is unbounded. 
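The example min_{x ∈ R} (x² + 1) above can be solved numerically as well as by inspection. A minimal sketch using plain gradient descent (the step size and iteration count are illustrative choices, not tuned recommendations):

```python
# Minimizing x^2 + 1 by gradient descent. The gradient of the
# objective is d/dx (x^2 + 1) = 2x, so each step moves x toward 0.

def minimize(grad, x0, lr=0.1, steps=200):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

xmin = minimize(lambda x: 2 * x, x0=5.0)
fmin = xmin ** 2 + 1
# xmin is approximately 0 and fmin approximately 1, matching the
# analytical minimum: value 1, attained at x = 0.
```

Because x² + 1 is convex, this local method finds the global minimum from any starting point; on a non-convex objective the same iteration could stall in one of the several local minima the text mentions.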
Mathematical optimization
–
Graph of a paraboloid given by f(x, y) = −(x ² + y ²) + 4. The global maximum at (0, 0, 4) is indicated by a red dot.
89.
Order theory
–
Order theory is a branch of mathematics which investigates the intuitive notion of order using binary relations. It provides a formal framework for describing statements such as "this is less than that" or "this precedes that". This article introduces the field and provides basic definitions; a list of order-theoretic terms can be found in the order theory glossary. Orders are everywhere in mathematics and related fields like computer science. The first order often discussed in primary school is the standard order on the natural numbers, e.g. "2 is less than 3", "10 is greater than 5". This intuitive concept can be extended to orders on other sets of numbers, such as the integers and the reals. The idea of being greater than or less than another number is one of the basic intuitions of number systems in general. Other familiar examples of orderings are the alphabetical order of words in a dictionary and the genealogical property of lineal descent within a group of people. The notion of order is very general, extending beyond contexts that have an immediate, intuitive feel of sequence or relative quantity. In other contexts orders may capture notions of containment or specialization. Abstractly, this type of order amounts to the subset relation, e.g. "pediatricians are physicians". However, many other orders do not allow every pair of elements to be compared; those orders, like the subset-of relation, for which there exist incomparable elements are called partial orders, while orders for which every pair of elements is comparable are total orders. Order theory captures the intuition of orders that arises from such examples in a general setting. This is achieved by specifying properties that a relation ≤ must have to be a mathematical order. This more abstract approach makes sense, because one can derive numerous theorems in the general setting. These insights can then be transferred to many less abstract applications. Driven by the wide practical usage of orders, numerous special kinds of ordered sets have been defined. In addition, order theory does not restrict itself to the various classes of ordering relations. 
A simple example of an order-theoretic property for functions comes from analysis, where monotone functions are frequently found. This section introduces ordered sets by building upon the concepts of set theory, arithmetic, and binary relations. Suppose that P is a set and that ≤ is a binary relation on P. Then ≤ is a partial order if it is reflexive, antisymmetric, and transitive; a set with a partial order on it is called a partially ordered set, poset, or, if the intended meaning is clear, just an ordered set. By checking these properties, one sees that the well-known orders on the natural numbers, integers, and rational numbers are all partial orders in this sense.
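The three defining properties of a partial order can be verified mechanically on a small example. The following sketch (my own illustration, not from the article) checks that divisibility is a partial order on the divisors of 60, the poset pictured in the Hasse diagram below, and exhibits a pair of incomparable elements, showing why the order is only partial:

```python
from itertools import product

# Divisibility as a binary relation on the divisors of 60.
# leq(a, b) means "a divides b".
P = [d for d in range(1, 61) if 60 % d == 0]

def leq(a, b):
    return b % a == 0

# A relation <= is a partial order if it is reflexive, antisymmetric,
# and transitive; brute-force checks confirm all three for divisibility.
reflexive = all(leq(a, a) for a in P)
antisymmetric = all(not (leq(a, b) and leq(b, a)) or a == b
                    for a, b in product(P, P))
transitive = all(not (leq(a, b) and leq(b, c)) or leq(a, c)
                 for a, b, c in product(P, P, P))
assert reflexive and antisymmetric and transitive

# Divisibility is only a partial order: 4 and 6 both divide 60,
# but neither divides the other, so they are incomparable.
assert not leq(4, 6) and not leq(6, 4)
```

The same brute-force template works for any finite relation; only `P` and `leq` need to change.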
Order theory
–
Hasse diagram of the set of all divisors of 60, partially ordered by divisibility
90.
Philosophy of mathematics
–
The philosophy of mathematics is the branch of philosophy that studies the philosophical assumptions, foundations, and implications of mathematics. The aim of the philosophy of mathematics is to provide an account of the nature and methodology of mathematics; the logical and structural nature of mathematics itself makes this study both broad and unique among its philosophical counterparts. The terms philosophy of mathematics and mathematical philosophy are frequently used interchangeably; the latter, however, may also be used to refer to several other areas of study, including the philosophy of an individual practitioner or of a like-minded community of practicing mathematicians. Recurrent themes include: What is the role of humankind in developing mathematics? What are the sources of mathematical subject matter? What is the ontological status of mathematical entities? What does it mean to refer to a mathematical object? What is the character of a mathematical proposition? What is the relation between logic and mathematics? What is the role of hermeneutics in mathematics? What kinds of inquiry play a role in mathematics? What are the objectives of mathematical inquiry? What gives mathematics its hold on experience? What are the human traits behind mathematics? What is the source and nature of mathematical truth? What is the relationship between the world of mathematics and the material universe? The origin of mathematics is subject to argument: whether the birth of mathematics was a random happening or induced by necessity duly contingent upon other subjects, say for example physics, is still a matter of prolific debate. Many thinkers have contributed their ideas concerning the nature of mathematics; there are traditions of mathematical philosophy in both Western philosophy and Eastern philosophy. 
Greek philosophy on mathematics was strongly influenced by the study of geometry. For example, at one time, the Greeks held the opinion that 1 was not a number, but rather a unit of arbitrary length. A number was defined as a multitude; therefore 3, for example, represented a certain multitude of units, and was thus not truly a number. At another point, a similar argument was made that 2 was not a number. These earlier Greek ideas of numbers were later upended by the discovery of the irrationality of the square root of two.
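The discovery mentioned above admits a one-paragraph proof; the following standard sketch (supplied here for context, not taken from the article) shows why no ratio of integers can equal the square root of two:

```latex
Suppose $\sqrt{2} = p/q$ with $p, q$ integers and the fraction in lowest
terms. Squaring gives $p^2 = 2q^2$, so $p^2$ is even, hence $p$ is even;
writing $p = 2k$ yields $4k^2 = 2q^2$, i.e.\ $q^2 = 2k^2$, so $q$ is even
as well. Then $p$ and $q$ share the factor $2$, contradicting lowest
terms. Therefore $\sqrt{2}$ is irrational.
```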
Philosophy of mathematics
–
David Hilbert
91.
Set theory
–
Set theory is a branch of mathematical logic that studies sets, which informally are collections of objects. Although any type of object can be collected into a set, set theory is applied most often to objects that are relevant to mathematics; the language of set theory can be used in the definitions of nearly all mathematical objects. The modern study of set theory was initiated by Georg Cantor. Set theory is commonly employed as a foundational system for mathematics, particularly in the form of Zermelo–Fraenkel set theory with the axiom of choice. Beyond its foundational role, set theory is a branch of mathematics in its own right; contemporary research into set theory includes a diverse collection of topics, ranging from the structure of the real number line to the study of the consistency of large cardinals. Mathematical topics typically emerge and evolve through interactions among many researchers. Set theory, however, was founded by a single paper in 1874 by Georg Cantor: On a Property of the Collection of All Real Algebraic Numbers. Since the 5th century BC, beginning with Greek mathematician Zeno of Elea in the West and early Indian mathematicians in the East, mathematicians had struggled with the concept of infinity; especially notable is the work of Bernard Bolzano in the first half of the 19th century. The modern understanding of infinity began in 1867–71, with Cantor's work on number theory; an 1872 meeting between Cantor and Richard Dedekind influenced Cantor's thinking and culminated in Cantor's 1874 paper. Cantor's work initially polarized the mathematicians of his day: while Karl Weierstrass and Dedekind supported Cantor, Leopold Kronecker, now seen as a founder of mathematical constructivism, did not. The utility of set theory led to the article Mengenlehre, contributed in 1898 by Arthur Schoenflies to Klein's encyclopedia; in 1899 Cantor had himself posed the question "What is the cardinal number of the set of all sets?". 
Russell used his paradox as a theme in his 1903 review of continental mathematics in his The Principles of Mathematics. In 1906 English readers gained the book Theory of Sets of Points by William Henry Young and his wife Grace Chisholm Young, published by Cambridge University Press. The momentum of set theory was such that debate on the paradoxes did not lead to its abandonment; the work of Zermelo in 1908 and Abraham Fraenkel in 1922 resulted in the set of axioms ZFC, which became the most commonly used set of axioms for set theory. The work of analysts such as Henri Lebesgue demonstrated the great mathematical utility of set theory. Set theory is used as a foundational system, although in some areas category theory is thought to be a preferred foundation. Set theory begins with a fundamental binary relation between an object o and a set A: if o is a member of A, the notation o ∈ A is used. Since sets are objects, the membership relation can relate sets as well. A derived binary relation between two sets is the subset relation, also called set inclusion: if all the members of set A are also members of set B, then A is a subset of B, written A ⊆ B. For example, {1, 2} is a subset of {1, 2, 3}, and so is {2}, but {1, 4} is not. As implied by this definition, a set is a subset of itself; for cases where this possibility is unsuitable, the term proper subset is defined: A is a proper subset of B if A ⊆ B and A ≠ B.
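The membership, subset, and proper-subset relations just described map directly onto Python's built-in set type; the following short sketch (the particular values are illustrative) makes each relation concrete:

```python
# Membership and inclusion for finite sets via Python's built-in set type.
A = {1, 2}
B = {1, 2, 3}

assert 2 in B                # membership: 2 is an element of B
assert A <= B                # inclusion: every member of A is a member of B
assert B <= B                # every set is a subset of itself
assert A < B                 # proper subset: A is a subset of B and A != B
assert not (B < B)           # no set is a proper subset of itself
assert not ({1, 4} <= B)     # 4 is not in B, so {1, 4} is not a subset of B
```

Here `<=` plays the role of ⊆ and `<` the role of proper inclusion, mirroring the definitions above.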
Set theory
–
Georg Cantor
Set theory
–
A Venn diagram illustrating the intersection of two sets.
92.
Discrete mathematics
–
Discrete mathematics is the study of mathematical structures that are fundamentally discrete rather than continuous; it therefore excludes topics such as calculus. Discrete objects can often be enumerated by integers; more formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets. However, there is no exact definition of the term discrete mathematics; indeed, discrete mathematics is described less by what is included than by what is excluded: continuously varying quantities. The set of objects studied in discrete mathematics can be finite or infinite. The term finite mathematics is sometimes applied to parts of the field of discrete mathematics that deal with finite sets. Computer implementations are significant in applying ideas from discrete mathematics to real-world problems. Although the main objects of study in discrete mathematics are discrete objects, analytic methods from continuous mathematics are often employed as well. In university curricula, Discrete Mathematics appeared in the 1980s, initially as a computer science support course; some high-school-level discrete mathematics textbooks have appeared as well, and at this level discrete mathematics is seen as a preparatory course. The Fulkerson Prize is awarded for outstanding papers in discrete mathematics. The history of discrete mathematics has involved a number of challenging problems which have focused attention within areas of the field. In graph theory, much research was motivated by attempts to prove the four color theorem, first stated in 1852. In logic, the second problem on David Hilbert's list of open problems presented in 1900 was to prove that the axioms of arithmetic are consistent; Gödel's second incompleteness theorem, proved in 1931, showed that this was not possible, at least not within arithmetic itself. Hilbert's tenth problem was to determine whether a given polynomial Diophantine equation with integer coefficients has an integer solution. 
In 1970, Yuri Matiyasevich proved that this could not be done. At the same time, military requirements motivated advances in operations research. The Cold War meant that cryptography remained important, with fundamental advances such as public-key cryptography being developed in the following decades; operations research remained important as a tool in business and project management, with the critical path method being developed in the 1950s. The telecommunication industry has also motivated advances in discrete mathematics, particularly in graph theory. Formal verification of statements in logic has been necessary for the development of safety-critical systems. Computational geometry has been an important part of the computer graphics incorporated into modern video games. Currently, one of the most famous open problems in theoretical computer science is the P = NP problem, which involves the relationship between the complexity classes P and NP.
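The graphs mentioned above are among the simplest discrete structures to program. As a small sketch (my own example, not from the article), a graph can be stored as an adjacency list, and a greedy algorithm can assign colors so that adjacent vertices differ, the kind of combinatorial question behind the four color theorem; note that greedy coloring does not in general achieve the minimum number of colors:

```python
# A small undirected graph as an adjacency list (dict of vertex -> neighbors),
# a typical discrete structure: finitely many vertices, finitely many edges.
graph = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"B", "C"},
}

def greedy_coloring(adj):
    """Give each vertex the smallest color index not already used by one
    of its colored neighbors. Valid, but not necessarily optimal."""
    colors = {}
    for v in sorted(adj):  # fixed vertex order for determinism
        used = {colors[u] for u in adj[v] if u in colors}
        colors[v] = next(c for c in range(len(adj)) if c not in used)
    return colors

coloring = greedy_coloring(graph)
# A coloring is proper when adjacent vertices always get different colors.
assert all(coloring[u] != coloring[v] for u in graph for v in graph[u])
```

On this graph the greedy pass happens to use three colors, and three are in fact necessary here because A, B, and C are mutually adjacent.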
Discrete mathematics
–
Graphs like this are among the objects studied by discrete mathematics, for their interesting mathematical properties, their usefulness as models of real-world problems, and their importance in developing computer algorithms.