1.
YBC 7289
–
YBC 7289 is a Babylonian clay tablet, held in the Yale Babylonian Collection, believed to date from the Old Babylonian period (c. 1800–1600 BC). It displays a square with its two diagonals, annotated with a sexagesimal approximation of the square root of 2: 1;24,51,10, that is, 1 + 24/60 + 51/60² + 10/60³ ≈ 1.41421296, accurate to about six decimal digits. Since the square root of 2 is the length of a diagonal of a square with sides of one unit, the tablet is among the earliest surviving evidence of Babylonian knowledge of this constant and of procedures for computing square roots.
2.
Square root of 2
–
The square root of 2, or the one-half power of 2, written in mathematics as √2 or 2^(1/2), is the positive algebraic number that, when multiplied by itself, gives the number 2. Technically, it should be called the principal square root of 2, to distinguish it from the negative number with the same property. Geometrically, the square root of 2 is the length of a diagonal across a square with sides of one unit of length. It was probably the first number known to be irrational. The rational approximation of the square root of two, 665,857/470,832, derived from the fourth step in the Babylonian algorithm starting with a0 = 1, is too large by approximately 1.6×10⁻¹²: its square is 2.0000000000045…. The rational approximation 99/70 is also frequently used; despite having a denominator of only 70, it differs from the correct value by less than 1/10,000. The numerical value of the square root of two, truncated to 65 decimal places, is 1.41421356237309504880168872420969807856967187537694807317667973799…. The Babylonian tablet YBC 7289 gives the sexagesimal approximation 1;24,51,10, that is, 1 + 24/60 + 51/60² + 10/60³ ≈ 1.41421296. Ancient Indian texts give the approximation 1 + 1/3 + 1/(3×4) − 1/(3×4×34) = 577/408 ≈ 1.41421568627450980…. This approximation is the seventh in a sequence of increasingly accurate approximations based on the sequence of Pell numbers; despite having a smaller denominator, it is only slightly less accurate than the Babylonian approximation. The Pythagoreans discovered that the diagonal of a square is incommensurable with its side, or in modern language, that the square root of two is irrational. Little is known with certainty about the time or circumstances of this discovery, but the name of Hippasus of Metapontum is often mentioned. For a while, the Pythagoreans treated as a secret the discovery that the square root of two is irrational, and, according to legend, Hippasus was murdered for divulging it. The square root of two is occasionally called Pythagoras' number or Pythagoras' constant, for example by Conway & Guy. There are a number of algorithms for approximating √2, which as a ratio of integers or as a decimal can only be approximated.
The most common algorithm for this, used as a basis in many computers and calculators, is the Babylonian method of computing square roots, one of many methods of computing square roots. It goes as follows: first, pick a guess a0 > 0; then, using that guess, iterate through the recursive computation a(n+1) = (a(n) + 2/a(n))/2 = a(n)/2 + 1/a(n). The more iterations through the algorithm, the better the approximation of the square root of 2. Each iteration approximately doubles the number of correct digits. Starting with a0 = 1, the next approximations are 3/2 = 1.5, 17/12 ≈ 1.416, and 577/408 ≈ 1.414216. The value of √2 was calculated to 137,438,953,444 decimal places by Yasumasa Kanada's team in 1997; in February 2006 the record for the calculation of √2 was eclipsed with the use of a home computer. Shigeru Kondo calculated 1 trillion decimal places in 2010; for a development of this record, see the table below. Among mathematical constants with computationally challenging decimal expansions, only π has been calculated more precisely; such computations aim to check empirically whether such numbers are normal.
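The Babylonian recurrence described above is easy to check directly. A minimal Python sketch (the function name is our own), using exact rational arithmetic to reproduce the iterates 3/2, 17/12, … mentioned in the text:

```python
from fractions import Fraction

def babylonian_sqrt2(steps, a0=Fraction(1)):
    """Return the exact rational iterates a_0..a_steps of the
    Babylonian method for sqrt(2): a_{n+1} = (a_n + 2/a_n) / 2."""
    a = Fraction(a0)
    iterates = [a]
    for _ in range(steps):
        a = (a + 2 / a) / 2   # average the guess with 2/guess
        iterates.append(a)
    return iterates

its = babylonian_sqrt2(4)
print([str(f) for f in its])  # ['1', '3/2', '17/12', '577/408', '665857/470832']
```

The fourth iterate is exactly the fraction 665,857/470,832 quoted earlier, whose square exceeds 2 by about 4.5×10⁻¹².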
3.
Decimal
–
This article aims to be an accessible introduction; for the mathematical definition, see Decimal representation. The decimal numeral system has ten as its base, which, in decimal, is written 10, as is the base in every positional numeral system. It is the base most widely used by modern civilizations. Decimal fractions have terminating decimal representations, and other fractions have repeating decimal representations. Decimal notation is the writing of numbers in a base-ten numeral system. Examples are Brahmi numerals, Greek numerals, Hebrew numerals, Roman numerals, and Chinese numerals. Roman numerals have symbols for the decimal powers and secondary symbols for half these values. Brahmi numerals have symbols for the nine numbers 1–9, the nine decades 10–90, plus a symbol for 100. Chinese numerals have symbols for 1–9, and additional symbols for powers of ten, which in modern usage reach 10⁷². Positional decimal systems include a zero and use symbols for the ten values 0 to 9 to represent any number. Positional notation uses positions for each power of ten: units, tens, hundreds, thousands, etc.; each position has a value ten times that of the position to its right. There were at least two independent sources of positional decimal systems in ancient civilization: the Chinese counting rod system and the Hindu–Arabic numeral system. Ten is the count of fingers and thumbs on both hands, and the English word digit, as well as its translation in many languages, is also the anatomical term for fingers and toes. In English, decimal means tenth, and decimate means reduce by a tenth. However, the symbols used in different areas are not identical; for instance, Western Arabic numerals differ from the forms used by other Arab cultures. A decimal fraction is a fraction whose denominator is a power of ten. For example, the decimal fractions 8/10, 1489/100, 24/100000, and 58900/10000 are expressed in decimal notation as 0.8, 14.89, 0.00024, and 5.8900 respectively.
In English-speaking countries, some Latin American countries, and many Asian countries, a period (or raised period) is used as the decimal separator; in many other countries, particularly in Europe, a comma is used. The integer part, or integral part, of a number is the part to the left of the decimal separator; the part from the separator to the right is the fractional part. It is usual for a number that consists only of a fractional part to have a leading zero in its notation. Any rational number with a denominator whose only prime factors are 2 and/or 5 may be expressed as a decimal fraction and has a finite decimal expansion: 1/2 = 0.5, 1/20 = 0.05, 1/5 = 0.2, 1/50 = 0.02, 1/4 = 0.25, 1/40 = 0.025, 1/25 = 0.04, 1/8 = 0.125, 1/125 = 0.008, 1/10 = 0.1.
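The rule above, that a fraction terminates exactly when its reduced denominator has no prime factors other than 2 and 5, can be sketched in a few lines of Python (`terminates` is our own helper name):

```python
from fractions import Fraction

def terminates(q: Fraction) -> bool:
    """True if q has a finite decimal expansion, i.e. the reduced
    denominator has no prime factors other than 2 and 5."""
    d = q.denominator          # Fraction is stored in lowest terms
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

print(terminates(Fraction(1, 8)))        # 1/8 = 0.125  -> True
print(terminates(Fraction(1, 3)))        # 1/3 = 0.333… -> False
print(terminates(Fraction(24, 100000)))  # 0.00024      -> True
```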
4.
Algorithm
–
In mathematics and computer science, an algorithm is a self-contained sequence of actions to be performed. Algorithms can perform calculation, data processing, and automated reasoning tasks. An algorithm is an effective method that can be expressed within a finite amount of space and time, and in a well-defined formal language, for calculating a function. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. Giving a formal definition of algorithms, corresponding to the intuitive notion, remains a challenging problem. In English, the word was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it wasn't until the late 19th century that algorithm took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris, which translates as: Algorism is the art by which at present we use those Indian figures, which number two times five. The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice, or Talibus Indorum, or Hindu numerals. An informal definition could be a set of rules that precisely defines a sequence of operations, which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually. But humans can do something equally useful, in the case of certain enumerably infinite sets: they can give explicit instructions for determining the nth member of the set, for arbitrary finite n. An enumerably infinite set is one whose elements can be put into one-to-one correspondence with the integers. The concept of algorithm is also used to define the notion of decidability.
That notion is central for explaining how formal systems come into being, starting from a set of axioms. In logic, the time that an algorithm requires to complete cannot be measured, as it is not apparently related to our customary physical dimension. From such uncertainties, which characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete and abstract usage of the term. Algorithms are essential to the way computers process data; thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Although this may seem extreme, the arguments in its favor are hard to refute. According to Gurevich, Turing's informal argument in favor of his thesis justifies a stronger thesis; according to Savage, an algorithm is a computational process defined by a Turing machine. Typically, when an algorithm is associated with processing information, data can be read from an input source and written to an output device. Stored data are regarded as part of the state of the entity performing the algorithm; in practice, the state is stored in one or more data structures. For some such computational process, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. That is, any conditional steps must be dealt with, case by case.
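The idea above, explicit instructions for determining the nth member of an enumerably infinite set for arbitrary finite n, can be illustrated with a small, always-terminating Python procedure for the set of prime numbers (our own example, not taken from the source):

```python
def nth_prime(n: int) -> int:
    """Return the nth member (1-indexed) of the enumerably infinite
    set of primes, by trial division. Terminates for every finite n."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        # candidate is prime iff no d in [2, sqrt(candidate)] divides it
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

print([nth_prime(n) for n in range(1, 8)])  # [2, 3, 5, 7, 11, 13, 17]
```

The procedure stops for every finite n, so it is an algorithm in the sense discussed above, even though the set it enumerates is infinite.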
5.
Mathematical analysis
–
Mathematical analysis is the branch of mathematics dealing with limits and related theories, such as differentiation, integration, measure, infinite series, and analytic functions. These theories are studied in the context of real and complex numbers. Analysis evolved from calculus, which involves the elementary concepts and techniques of analysis. Analysis may be distinguished from geometry; however, it can be applied to any space of mathematical objects that has a definition of nearness or specific distances between objects. Mathematical analysis formally developed in the 17th century during the Scientific Revolution, but early results in analysis were implicitly present in the early days of ancient Greek mathematics. For instance, a geometric sum is implicit in Zeno's paradox of the dichotomy. The explicit use of infinitesimals appears in Archimedes' The Method of Mechanical Theorems. In Asia, the Chinese mathematician Liu Hui used the method of exhaustion in the 3rd century AD to find the area of a circle. Zu Chongzhi established a method that would later be called Cavalieri's principle to find the volume of a sphere in the 5th century. The Indian mathematician Bhāskara II gave examples of the derivative and used what is now known as Rolle's theorem in the 12th century. In the 14th century, Madhava of Sangamagrama developed infinite series expansions, like the power series, and his followers at the Kerala school of astronomy and mathematics further expanded his works, up to the 16th century. The modern foundations of analysis were established in 17th century Europe. During this period, calculus techniques were applied to approximate discrete problems by continuous ones. In the 18th century, Euler introduced the notion of mathematical function. Real analysis began to emerge as an independent subject when Bernard Bolzano introduced the modern definition of continuity in 1816.
In 1821, Cauchy began to put calculus on a firm logical foundation by rejecting the principle of the generality of algebra widely used in earlier work; instead, Cauchy formulated calculus in terms of geometric ideas and infinitesimals. Thus, his definition of continuity required an infinitesimal change in x to correspond to an infinitesimal change in y. He also introduced the concept of the Cauchy sequence, and started the formal theory of complex analysis. Poisson, Liouville, Fourier and others studied partial differential equations and harmonic analysis. The contributions of these mathematicians and others, such as Weierstrass, developed the (ε, δ)-definition of limit approach, thus founding the modern field of mathematical analysis. In the middle of the 19th century, Riemann introduced his theory of integration. The last third of the century saw the arithmetization of analysis by Weierstrass, who thought that geometric reasoning was inherently misleading, and introduced the epsilon-delta definition of limit. Then, mathematicians started worrying that they were assuming the existence of a continuum of numbers without proof. Around that time, the attempts to refine the theorems of Riemann integration led to the study of the size of the set of discontinuities of real functions; also, "monsters" (such as nowhere continuous functions and continuous but nowhere differentiable functions) began to be investigated.
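The geometric sum implicit in Zeno's dichotomy, mentioned earlier in this entry, is an easy numerical illustration of a limit: the partial sums of 1/2 + 1/4 + 1/8 + … approach 1, and the remaining gap halves at every step. A minimal Python sketch (the helper name is ours):

```python
from fractions import Fraction

def partial_sum(n: int) -> Fraction:
    """Exact partial sum 1/2 + 1/4 + ... + 1/2**n of the dichotomy series."""
    return sum(Fraction(1, 2 ** k) for k in range(1, n + 1))

for n in (1, 2, 10, 30):
    s = partial_sum(n)
    # The gap to the limit 1 is exactly 2**-n, so the sums converge to 1.
    print(n, float(s), float(1 - s))
```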
6.
Discrete mathematics
–
Discrete mathematics is the study of mathematical structures that are fundamentally discrete rather than continuous. Discrete mathematics therefore excludes topics in continuous mathematics such as calculus. Discrete objects can often be enumerated by integers; more formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets. However, there is no exact definition of the term discrete mathematics; indeed, discrete mathematics is described less by what is included than by what is excluded: continuously varying quantities and related notions. The set of objects studied in discrete mathematics can be finite or infinite. The term finite mathematics is sometimes applied to parts of the field of discrete mathematics that deal with finite sets. Discrete mathematics is significant in computer science, and, conversely, computer implementations are significant in applying ideas from discrete mathematics to real-world problems. Although the main objects of study in discrete mathematics are discrete objects, analytic methods from continuous mathematics are often employed as well. In university curricula, Discrete Mathematics appeared in the 1980s, initially as a computer science support course; some high-school-level discrete mathematics textbooks have appeared as well. At this level, discrete mathematics is seen as a preparatory course. The Fulkerson Prize is awarded for outstanding papers in discrete mathematics. The history of discrete mathematics has involved a number of challenging problems which have focused attention within areas of the field. In graph theory, much research was motivated by attempts to prove the four color theorem, first stated in 1852. In logic, the second problem on David Hilbert's list of open problems presented in 1900 was to prove that the axioms of arithmetic are consistent. Gödel's second incompleteness theorem, proved in 1931, showed that this was not possible – at least not within arithmetic itself. Hilbert's tenth problem was to determine whether a given polynomial Diophantine equation with integer coefficients has an integer solution.
In 1970, Yuri Matiyasevich proved that this could not be done. At the same time, military requirements motivated advances in operations research. The Cold War meant that cryptography remained important, with fundamental advances such as public-key cryptography being developed in the following decades. Operations research remained important as a tool in business and project management, with the critical path method being developed in the 1950s. The telecommunication industry has also motivated advances in discrete mathematics, particularly in graph theory. Formal verification of statements in logic has been necessary for software development of safety-critical systems. Computational geometry has been an important part of the computer graphics incorporated into modern video games. Currently, one of the most famous open problems in theoretical computer science is the P = NP problem, which involves the relationship between the complexity classes P and NP.
7.
Diagonal
–
In geometry, a diagonal is a line segment joining two vertices of a polygon or polyhedron, when those vertices are not on the same edge. Informally, any sloping line is called diagonal. In matrix algebra, a diagonal of a square matrix is a set of entries extending from one corner to the farthest corner. There are also other, non-mathematical uses. Diagonal pliers are wire-cutting pliers defined by the cutting edges of the jaws intersecting the joint rivet at an angle, or "on a diagonal", hence the name. A diagonal lashing is a type of lashing used to bind spars or poles together, applied so that the lashings cross over the poles at an angle. In association football, the diagonal system of control is the method by which referees position themselves. As applied to a polygon, a diagonal is a line segment joining any two non-consecutive vertices; therefore, a quadrilateral has two diagonals, joining opposite pairs of vertices. For any convex polygon, all the diagonals are inside the polygon. In a convex polygon, if no three diagonals are concurrent at a single point, the number of regions that the diagonals divide the interior into is given by C(n, 4) + C(n, 2) − n + 1 = (n − 1)(n − 2)(n² − 3n + 12)/24. For n = 3, 4, 5, … the number of regions is 1, 4, 11, 25, 50, 91, 154, 246, …. In a polygon with n vertices the number of diagonals is given by n(n − 3)/2, and the number of interior intersections between the diagonals (again assuming no three are concurrent) is given by C(n, 4). In the case of a square matrix, the main or principal diagonal is the diagonal line of entries running from the top-left corner to the bottom-right corner. For a matrix A with row index specified by i and column index specified by j, the diagonal entries are those A_ij with j = i, and the off-diagonal entries are those not on the main diagonal. A diagonal matrix is one whose off-diagonal entries are all zero. A superdiagonal entry is one that is directly above and to the right of the main diagonal. By analogy, the subset of the Cartesian product X × X of any set X with itself, consisting of all pairs (x, x), is called the diagonal; this plays an important part in geometry. For example, the fixed points of a mapping F from X to itself may be obtained by intersecting the graph of F with the diagonal.
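The counting formulas above are easy to sanity-check; a short Python sketch (the function names are ours), comparing n(n − 3)/2 diagonals and the region formula against the sequence 1, 4, 11, 25, … quoted in the text:

```python
from math import comb

def num_diagonals(n: int) -> int:
    """Diagonals of an n-gon: each vertex connects to n-3 non-adjacent
    vertices, and each diagonal is counted from both endpoints."""
    return n * (n - 3) // 2

def num_regions(n: int) -> int:
    """Regions the diagonals cut a convex n-gon into, assuming
    no three diagonals meet at a single interior point."""
    return comb(n, 4) + comb(n, 2) - n + 1

print([num_diagonals(n) for n in range(3, 9)])  # [0, 2, 5, 9, 14, 20]
print([num_regions(n) for n in range(3, 11)])   # [1, 4, 11, 25, 50, 91, 154, 246]
```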
In geometric studies, the idea of intersecting the diagonal with itself is common, not directly, but by perturbing it within an equivalence class; this is related at a deep level with the Euler characteristic and the zeros of vector fields. For example, the circle S¹ has Betti numbers 1, 1, 0, 0, 0, …, and therefore Euler characteristic 0. A geometric way of expressing this is to look at the diagonal on the two-torus S¹×S¹ and observe that it can move off itself by a small motion.
8.
Unit square
–
In mathematics, a unit square is a square whose sides have length 1. Often, the unit square refers specifically to the square in the Cartesian plane with corners at the four points (0, 0), (1, 0), (0, 1), and (1, 1). In a Cartesian coordinate system, the unit square is defined as the square consisting of the points (x, y) where both x and y lie in the closed unit interval from 0 to 1. That is, the square is the Cartesian product I × I, where I is the closed unit interval. The unit square can also be thought of as a subset of the complex plane; in this view, the four corners of the unit square are at the four complex numbers 0, 1, i, and 1 + i. It is not known whether any point in the plane is a rational distance from all four vertices of the unit square; however, it is known that no such point is on an edge of the square.
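A tiny Python check of the complex-plane view described above: the four corners 0, 1, i, 1 + i form a square with unit sides and diagonal √2 (the script is our own illustration):

```python
# Corners of the unit square as complex numbers, listed in cyclic order.
corners = [0j, 1 + 0j, 1 + 1j, 1j]

# Consecutive corners are distance 1 apart: the four unit sides.
sides = [abs(corners[(k + 1) % 4] - corners[k]) for k in range(4)]
print(sides)   # [1.0, 1.0, 1.0, 1.0]

# Opposite corners are sqrt(2) apart: the diagonal.
diag = abs(corners[2] - corners[0])
print(diag)    # 1.4142135623730951
```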
9.
Square root
–
In mathematics, a square root of a number a is a number y such that y² = a; in other words, a number y whose square is a. For example, 4 and −4 are square roots of 16, because 4² = (−4)² = 16. Every nonnegative real number a has a unique nonnegative square root, called the principal square root, which is denoted by √a, where √ is called the radical sign or radix. For example, the principal square root of 9 is 3, denoted √9 = 3. The term whose root is being considered is known as the radicand; the radicand is the number or expression underneath the radical sign, in this example 9. Every positive number a has two square roots: √a, which is positive, and −√a, which is negative. Together, these two roots are denoted ±√a. Although the principal square root of a positive number is only one of its two square roots, the designation "the square root" is often used to refer to the principal square root. For positive a, the principal square root can also be written in exponent notation, as a^(1/2). Square roots of negative numbers can be discussed within the framework of complex numbers. In Ancient India, the knowledge of theoretical and applied aspects of square and square root was at least as old as the Sulba Sutras; a method for finding very good approximations to the square roots of 2 and 3 is given in the Baudhayana Sulba Sutra. Aryabhata, in the Aryabhatiya, has given a method for finding the square root of numbers having many digits. It was known to the ancient Greeks that square roots of positive integers that are not perfect squares are always irrational numbers: numbers not expressible as a ratio of two integers. This is the theorem Euclid X, 9, almost certainly due to Theaetetus, dating back to circa 380 BC. The particular case √2 is assumed to date back earlier to the Pythagoreans and is traditionally attributed to Hippasus. Mahāvīra, a 9th-century Indian mathematician, was the first to state that square roots of negative numbers do not exist. A symbol for square roots, written as an elaborate R, was invented by Regiomontanus.
An R was also used for radix to indicate square roots in Gerolamo Cardano's Ars Magna. According to historian of mathematics D. E. Smith, Aryabhata's method for finding the square root was first introduced in Europe by Cataneo, in 1546. According to Jeffrey A. Oaks, Arabs used the letter jīm/ĝīm to indicate square roots; the letter jīm resembles the present square root shape. Its usage goes as far as the end of the twelfth century in the works of the Moroccan mathematician Ibn al-Yasamin. The symbol √ for the square root was first used in print in 1525, in Christoph Rudolff's Coss.
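The ±√a pairing and the exponent notation a^(1/2) described earlier in this entry can be illustrated in a few lines of Python (our own sketch):

```python
import math

a = 16.0
principal = math.sqrt(a)         # the principal (nonnegative) square root
roots = (principal, -principal)  # together denoted +/- sqrt(a)

print(roots)                           # (4.0, -4.0)
print(all(y * y == a for y in roots))  # True: both satisfy y**2 == a
print(a ** 0.5)                        # exponent notation, also 4.0
```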
10.
Astronomy
–
Astronomy is a natural science that studies celestial objects and phenomena. It applies mathematics, physics, and chemistry in an effort to explain the origin of those objects and phenomena and their evolution. Objects of interest include planets, moons, stars, galaxies, and comets, while the phenomena include supernova explosions and gamma ray bursts. More generally, all astronomical phenomena that originate outside Earth's atmosphere are within the purview of astronomy. A related but distinct subject, physical cosmology, is concerned with the study of the Universe as a whole. Astronomy is the oldest of the natural sciences; the early civilizations in recorded history, such as the Babylonians, Greeks, Indians, Egyptians, Nubians, Iranians, and Chinese, performed methodical observations of the night sky. During the 20th century, the field of professional astronomy split into observational and theoretical branches. Observational astronomy is focused on acquiring data from observations of astronomical objects; theoretical astronomy is oriented toward the development of computer or analytical models to describe astronomical objects and phenomena. The two fields complement each other, with theoretical astronomy seeking to explain the observational results and observations being used to confirm theoretical results. Astronomy is one of the few sciences where amateurs can play an active role, especially in the discovery and observation of transient phenomena. Amateur astronomers have made and contributed to many important astronomical discoveries. Astronomy means "law of the stars". Astronomy should not be confused with astrology, the belief system which claims that human affairs are correlated with the positions of celestial objects. Although the two share a common origin, they are now entirely distinct. Generally, either the term astronomy or astrophysics may be used to refer to this subject; however, since most modern astronomical research deals with subjects related to physics, modern astronomy could actually be called astrophysics.
Few fields, such as astrometry, are purely astronomy rather than also astrophysics. Some titles of the leading scientific journals in this field include The Astronomical Journal, The Astrophysical Journal, and Astronomy and Astrophysics. In early times, astronomy only comprised the observation and predictions of the motions of objects visible to the naked eye. In some locations, early cultures assembled massive artifacts that possibly had some astronomical purpose. Before tools such as the telescope were invented, early study of the stars was conducted using the naked eye. Most of early astronomy actually consisted of mapping the positions of the stars and planets, a science now referred to as astrometry. From these observations, early ideas about the motions of the planets were formed, and the nature of the Sun, Moon and the Earth in the Universe was explored philosophically. The Earth was believed to be the center of the Universe with the Sun, the Moon and the stars rotating around it. This is known as the geocentric model of the Universe, or the Ptolemaic system. The Babylonians discovered that lunar eclipses recurred in a repeating cycle known as a saros.
11.
Carpentry
–
Carpentry in the United States is almost always done by men: with 98.5% of carpenters being male, it was the fourth most male-dominated occupation in the country in 1999. Carpenters are usually the first tradesmen on a job and the last to leave. Carpenters normally framed post-and-beam buildings until the end of the 19th century. It is also common for the skill to be learned by gaining work experience rather than through a formal training program, which may be the case in many places. The word carpenter is the English rendering of the Old French word carpentier, which is derived from the Latin carpentarius. The Middle English and Scots word was wright, which could be used in compound forms such as wheelwright or boatwright. On a building site, a carpenter's tasks divide into first fix and second fix work: an easy way to envisage this is that first fix work is all that is done before plastering takes place, and second fix is done after plastering takes place. Second fix work includes the construction of items such as skirting boards and architraves. Carpentry is also used to construct the formwork into which concrete is poured during the building of structures such as roads. In the UK, the skill of making timber formwork for poured, or in situ, concrete is referred to as shuttering. Although the work of a carpenter and joiner are often combined, joiner is less common than the terms finish carpenter or cabinetmaker. The terms housewright and barnwright were used historically, and are now used by carpenters who work using traditional methods. Someone who builds custom concrete formwork is a form carpenter. Wood is one of mankind's oldest building materials. The ability to shape wood improved with technological advances from the stone age to the bronze age to the iron age. The oldest surviving complete text is Vitruvius' ten books collectively titled De architectura, which discusses some carpentry. By the 16th century sawmills were coming into use in Europe. The founding of America was partly based on a desire to extract resources from the new continent, including wood for use in ships and buildings in Europe.
In the 18th century, part of the Industrial Revolution was the invention of the steam engine; these technologies, combined with the invention of the circular saw, led to the development of balloon framing, which was the beginning of the decline of traditional timber framing. The 19th century saw the development of electrical engineering and distribution, which allowed the development of hand-held power tools and wire nails. In the 20th century, portland cement came into use, and concrete foundations allowed carpenters to do away with heavy timber sills. Also, drywall came into common use, replacing lime plaster on wooden lath; plywood, engineered lumber and chemically treated lumber also came into use.
12.
Babylonian mathematics
–
Babylonian mathematics was any mathematics developed or practiced by the people of Mesopotamia, from the days of the early Sumerians to the fall of Babylon in 539 BC. Babylonian mathematical texts are plentiful and well edited; in respect of time they fall in two distinct groups, one from the Old Babylonian period, the other mainly Seleucid from the last three or four centuries BC. In respect of content there is scarcely any difference between the two groups of texts. Thus Babylonian mathematics remained constant, in character and content, for two millennia. In contrast to the scarcity of sources in Egyptian mathematics, our knowledge of Babylonian mathematics is derived from some 400 clay tablets unearthed since the 1850s. Written in cuneiform script, tablets were inscribed while the clay was moist, and baked hard in an oven or by the heat of the sun. The majority of recovered clay tablets date from 1800 to 1600 BCE, and cover topics that include fractions, algebra, quadratic and cubic equations, and the Pythagorean theorem. The Babylonian tablet YBC 7289 gives an approximation to √2 accurate to three significant sexagesimal digits. Babylonian mathematics is a range of numeric and more advanced mathematical practices in the ancient Near East, written in cuneiform script. Study has historically focused on the Old Babylonian period in the second millennium BC due to the wealth of data available. There has been debate over the earliest appearance of Babylonian mathematics. Babylonian mathematics was primarily written on clay tablets in cuneiform script in the Akkadian or Sumerian languages. "Babylonian mathematics" is perhaps an unhelpful term, since the earliest suggested origins date to the use of accounting devices, such as bullae and tokens. The Babylonian system of mathematics was a sexagesimal (base-60) numeral system; from this we derive the modern-day usage of 60 seconds in a minute and 60 minutes in an hour. The Babylonians were able to make great advances in mathematics for two reasons.
Firstly, the number 60 is a highly composite number, having factors of 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, and 60. Additionally, unlike the Egyptians and Romans, the Babylonians had a true place-value system. The ancient Sumerians of Mesopotamia developed a complex system of metrology from 3000 BC. From 2600 BC onwards, the Sumerians wrote multiplication tables on clay tablets and dealt with geometrical exercises; the earliest traces of the Babylonian numerals also date back to this period. Most clay tablets that describe Babylonian mathematics belong to the Old Babylonian period; some clay tablets contain mathematical lists and tables, others contain problems and worked solutions. The Babylonians used pre-calculated tables to assist with arithmetic; for example, two tablets found at Senkerah on the Euphrates in 1854, dating from 2000 BC, give lists of the squares of numbers up to 59 and the cubes of numbers up to 32. The Babylonians used the lists of squares together with the formulae ab = ((a + b)² − a² − b²)/2 and ab = ((a + b)² − (a − b)²)/4 to simplify multiplication. The Babylonians did not have an algorithm for long division; instead they based their method on the fact that a/b = a × (1/b), together with a table of reciprocals.
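The table-based tricks just described are easy to reproduce. A minimal Python sketch (function names ours), multiplying with a pre-computed table of squares and dividing with a table of reciprocals:

```python
from fractions import Fraction

# Pre-computed tables, playing the role of the Babylonians' clay tablets.
squares = {n: n * n for n in range(0, 120)}
reciprocals = {n: Fraction(1, n) for n in range(1, 60)}

def multiply(a: int, b: int) -> int:
    """ab = ((a+b)^2 - a^2 - b^2) / 2, using only table lookups."""
    return (squares[a + b] - squares[a] - squares[b]) // 2

def divide(a: int, b: int) -> Fraction:
    """a/b = a * (1/b), using the table of reciprocals."""
    return a * reciprocals[b]

print(multiply(37, 24))   # 888
print(divide(7, 4))       # 7/4
```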
13.
Ordinary differential equation
–
In mathematics, an ordinary differential equation (ODE) is a differential equation containing one or more functions of one independent variable and the derivatives of those functions. The term ordinary is used in contrast with partial differential equations, which may involve more than one independent variable. Linear ODEs, whose solutions can be added and multiplied by coefficients, are well understood, and exact closed-form solutions are available. Graphical and numerical methods, applied by hand or by computer, may approximate solutions of ODEs and yield useful information, often sufficing in the absence of exact, analytic solutions. Ordinary differential equations arise in many contexts of mathematics and science. Mathematical descriptions of change use differentials and derivatives; often, quantities are defined as the rate of change of other quantities, or gradients of quantities, which is how they enter differential equations. Specific mathematical fields include geometry and analytical mechanics; scientific fields include much of physics and astronomy, meteorology, chemistry, biology, ecology and population modelling, and economics. Many mathematicians have studied differential equations and contributed to the field, including Newton, Leibniz, the Bernoulli family, Riccati, Clairaut, and d'Alembert. In general, F is a function of the position x of the particle at time t. The unknown function x appears on both sides of the equation, as indicated in the notation F. In what follows, let y be a dependent variable and x an independent variable; the notation for differentiation varies depending upon the author and upon which notation is most useful for the task at hand. Given F, a function of x, y, and derivatives of y, an equation of the form F(x, y, y′, …, y⁽ⁿ⁻¹⁾) = y⁽ⁿ⁾ is called an explicit ordinary differential equation of order n. The function r is called the source term, leading to two further important classifications. Homogeneous: if r = 0, one automatic solution is the trivial solution y = 0. 
The solution of a homogeneous equation is a complementary function; the additional solution corresponding to the source term is the particular integral. The general solution can then be written as y = yc + yp. Non-linear: a differential equation that cannot be written in the form of a linear combination. A number of coupled differential equations form a system of equations; in column vector form this is y′ = F(x, y), where y and F are column vectors. These are not necessarily linear; the implicit analogue is F(x, y, y′) = 0, where 0 is the zero vector. In the same sources, implicit ODE systems with a singular Jacobian are termed differential algebraic equations (DAEs), and this distinction is not merely one of terminology: DAEs have fundamentally different characteristics and are generally more involved to solve than ODE systems. Given a differential equation F(x, y, y′, …, y⁽ⁿ⁾) = 0, a function u : I ⊂ ℝ → ℝ is called a solution or integral curve for F if u is n-times differentiable on I and F(x, u, u′, …, u⁽ⁿ⁾) = 0 for all x ∈ I.
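As a rough illustration of approximating an integral curve numerically, here is a forward Euler sketch for an explicit first-order ODE y′ = F(x, y); the scheme and the test equation are illustrative choices, not taken from the text.

```python
# A minimal sketch (assuming an explicit, first-order ODE y' = F(x, y))
# of approximating an integral curve with forward Euler steps.

def euler(F, x0, y0, h, n):
    """Return lists of points (x_i, y_i) approximating the solution."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        y0 = y0 + h * F(x0, y0)  # step along the tangent direction
        x0 = x0 + h
        xs.append(x0)
        ys.append(y0)
    return xs, ys

# y' = y with y(0) = 1 has the exact solution y = e^x.
xs, ys = euler(lambda x, y: y, 0.0, 1.0, 0.001, 1000)
print(ys[-1])  # close to e = 2.71828...
```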
14.
Stochastic differential equation
–
A stochastic differential equation (SDE) is a differential equation in which one or more of the terms is a stochastic process, resulting in a solution which is itself a stochastic process. SDEs are used to model various phenomena such as unstable stock prices or physical systems subject to thermal fluctuations. Typically, SDEs contain a variable which represents random white noise calculated as the derivative of Brownian motion or the Wiener process; however, other types of random behaviour are possible, such as jump processes. Early work on SDEs was done to describe Brownian motion in Einstein's famous paper; however, one of the earlier works related to Brownian motion is credited to Bachelier in his thesis Theory of Speculation. This work was followed up on by Langevin; later, Itô and Stratonovich put SDEs on more solid mathematical footing. In physical science, SDEs are usually written as Langevin equations, and these are sometimes ambiguously called "the Langevin equation" even though there are many possible forms. Those forms consist of an ordinary differential equation containing a deterministic part and an additional random white-noise term. A second form includes the Smoluchowski equation and the Fokker–Planck equation; these are partial differential equations which describe the time evolution of probability distribution functions. The third form is the Itô stochastic differential equation, which is most frequently used in mathematics; this is similar to the Langevin form, but it is usually written in differential notation. SDEs come in two varieties, corresponding to two versions of stochastic calculus. Brownian motion, or the Wiener process, was discovered to be exceptionally complex mathematically: the Wiener process is almost surely nowhere differentiable, and thus it requires its own rules of calculus. There are two dominating versions of stochastic calculus, the Itô stochastic calculus and the Stratonovich stochastic calculus. 
Each of the two has advantages and disadvantages, and newcomers are often confused about whether one is more appropriate than the other in a given situation. Guidelines exist, and conveniently one can readily convert an Itô SDE to an equivalent Stratonovich SDE and back again. Still, one must be careful which calculus to use when the SDE is initially written down. Numerical solution of stochastic differential equations, and especially stochastic partial differential equations, is a relatively young field. Almost all algorithms that are used for the solution of ordinary differential equations work very poorly for SDEs. A textbook describing many different algorithms is Kloeden & Platen; methods include the Euler–Maruyama method, the Milstein method, and the Runge–Kutta method. In physics, SDEs are typically written in the Langevin form. This form is usually usable because there are standard techniques for transforming higher-order equations into several coupled first-order equations by introducing new unknowns. If the g_i are constants, the system is said to be subject to additive noise; otherwise it is said to be subject to multiplicative noise. The latter term is somewhat misleading, as it has come to mean the general case even though it appears to imply the limited case in which g ∝ x.
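A minimal sketch of the Euler–Maruyama method mentioned above, applied to an Itô SDE dX = a(X) dt + b(X) dW; the coefficients below (geometric Brownian motion with mu = 0.05, sigma = 0.2) are invented for illustration.

```python
import random

# Euler-Maruyama sketch for the Ito SDE dX = a(X) dt + b(X) dW.
# Coefficients a and b below are illustrative, not from the source.

def euler_maruyama(a, b, x0, T, n, rng=random):
    dt = T / n
    x = x0
    for _ in range(n):
        dW = rng.gauss(0.0, dt ** 0.5)  # Wiener increment ~ N(0, dt)
        x = x + a(x) * dt + b(x) * dW
    return x

# Geometric Brownian motion: dX = mu*X dt + sigma*X dW (multiplicative noise)
random.seed(0)
print(euler_maruyama(lambda x: 0.05 * x, lambda x: 0.2 * x, 1.0, 1.0, 1000))
```

Because b(x) is proportional to x here, this is a case of multiplicative noise in the terminology above; replacing b with a constant function would give additive noise.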
15.
Markov chain
–
In probability theory and related fields, a Markov process, named after the Russian mathematician Andrey Markov, is a stochastic process that satisfies the Markov property: conditional on the present state of the system, its future and past states are independent. A Markov chain is a type of Markov process that has either a discrete state space or a discrete index set, but the precise definition of a Markov chain varies. Andrey Markov studied Markov processes in the early 20th century, publishing his first paper on the topic in 1906; random walks on the integers and the Gambler's ruin problem are examples of Markov processes and were studied hundreds of years earlier. These two processes are Markov processes in continuous time, while random walks on the integers and the Gambler's ruin problem are examples of Markov processes in discrete time. The algorithm known as PageRank, which was proposed for the internet search engine Google, is based on a Markov process. The adjective Markovian is used to describe something that is related to a Markov process. A Markov chain is a stochastic process with the Markov property. The term "Markov chain" refers to the sequence of random variables such a process moves through. It can thus be used for describing systems that follow a chain of linked events, where what happens next depends only on the current state of the system; the system's state space and time parameter index need to be specified. In addition, there are extensions of Markov processes that are referred to as such. Moreover, the time index need not necessarily be real-valued; like with the state space, there are conceivable processes that move through index sets with other mathematical constructs. Notice that the general state space continuous-time Markov chain is general to such a degree that it has no designated term. While the time parameter is usually discrete, the state space of a Markov chain does not have any generally agreed-on restrictions. However, many applications of Markov chains employ finite or countably infinite state spaces. Besides time-index and state-space parameters, there are many other variations, extensions and generalizations. 
For simplicity, most of this article concentrates on the discrete-time, discrete state-space case. The changes of state of the system are called transitions, and the probabilities associated with various state changes are called transition probabilities. The process is characterized by a state space, a transition matrix describing the probabilities of particular transitions, and an initial state (or initial distribution) across the state space. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state, and the process does not terminate.
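A toy discrete-time, discrete state-space chain makes the state space / transition matrix description concrete; the weather states and probabilities below are invented for illustration.

```python
import random

# A toy two-state Markov chain. Each row of P gives the transition
# probabilities out of a state and sums to 1, so there is always a
# next state and the process never terminates.

STATES = ["sunny", "rainy"]
P = {"sunny": [0.9, 0.1],
     "rainy": [0.5, 0.5]}

def step(state, rng=random):
    # Sample the next state from the current state's row of P.
    return rng.choices(STATES, weights=P[state])[0]

random.seed(1)
chain = ["sunny"]
for _ in range(5):
    chain.append(step(chain[-1]))
print(chain)
```

Note that `step` only ever looks at the current state, never the history: that is the Markov property in code.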
16.
Numerical method
–
In numerical analysis, a numerical method is a mathematical tool designed to solve numerical problems. The implementation of a numerical method with an appropriate convergence check in a programming language is called a numerical algorithm. Let F(x, y) = 0 be a problem, i.e. a functional relation between a datum x and a solution y. The problems of which the method consists need not be well-posed; if they are, the method is said to be stable or well-posed. Necessary conditions for a numerical method to effectively approximate F(x, y) = 0 are that xₙ → x and yₙ → y. So, a numerical method is called consistent if and only if the sequence of functions (Fₙ), n ∈ ℕ, pointwise converges to F on the set S of its solutions. When Fₙ = F for all n ∈ ℕ on S, the method is said to be strictly consistent. Denote by ℓₙ a sequence of admissible perturbations of x ∈ X for some numerical method M. One can easily prove that the point-wise convergence of the perturbed solutions (yₙ), n ∈ ℕ, to y implies the convergence of the associated method.
17.
Interpolation
–
In the mathematical field of numerical analysis, interpolation is a method of constructing new data points within the range of a discrete set of known data points. Given such a set of data points for a function, it is often required to interpolate, i.e. estimate, the value of that function for an intermediate value of the independent variable. A different problem, closely related to interpolation, is the approximation of a complicated function by a simple function. Suppose the formula for some given function is known, but is too complex to evaluate efficiently; a few known data points from the original function can be used to create an interpolation based on a simpler function. A related but distinct topic is the interpolation of operators; the classical results there are the Riesz–Thorin theorem and the Marcinkiewicz theorem, and there are many other subsequent results. For example, suppose we have a table like this, which gives some values of an unknown function f. Interpolation provides a means of estimating the function at intermediate points. There are many different interpolation methods, some of which are described below. Some of the concerns to take into account when choosing an appropriate algorithm are: How accurate is the method? How expensive is it? How smooth is the interpolant? How many data points are needed? The simplest interpolation method is to locate the nearest data value and assign the same value (nearest-neighbour interpolation). One of the simplest methods is linear interpolation. Consider the above example of estimating f(2.5). Since 2.5 is midway between 2 and 3, it is reasonable to take f(2.5) midway between f(2) = 0.9093 and f(3) = 0.1411, which yields 0.5252. Another disadvantage is that the interpolant is not differentiable at the points xₖ. The following error estimate shows that linear interpolation is not very precise. Denote the function which we want to interpolate by g; then the linear interpolation error is |f(x) − g(x)| ≤ C(x₁ − x₀)², where C = (1/8) max over r ∈ [x₀, x₁] of |g″(r)|. 
In words, the error is proportional to the square of the distance between the data points. The error in some other methods, including polynomial interpolation and spline interpolation, is proportional to higher powers of the distance between the data points; these methods also produce smoother interpolants. Polynomial interpolation is a generalization of linear interpolation: note that the linear interpolant is a linear function, and we now replace this interpolant with a polynomial of higher degree. Consider again the problem given above. A sixth-degree polynomial goes through all seven points; substituting x = 2.5 into it, we find that f(2.5) = 0.5965. Generally, if we have n data points, there is exactly one polynomial of degree at most n−1 going through all the data points.
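Assuming, as a sketch, that the tabulated values come from f(x) = sin(x) at x = 0, 1, …, 6 (consistent with f(2) = 0.9093 and f(3) = 0.1411 above), both estimates can be reproduced:

```python
import math

# Reproduce the two interpolation estimates discussed above, assuming the
# table holds f(x) = sin(x) at x = 0, 1, ..., 6.

xs = list(range(7))
ys = [math.sin(x) for x in xs]

# Linear interpolation at x = 2.5: midway between f(2) and f(3).
linear = (ys[2] + ys[3]) / 2
print(round(linear, 4))  # 0.5252

# Degree-6 polynomial through all seven points, in Lagrange form.
def lagrange(x, xs, ys):
    total = 0.0
    for j, yj in enumerate(ys):
        term = yj
        for m, xm in enumerate(xs):
            if m != j:
                term *= (x - xm) / (xs[j] - xm)
        total += term
    return total

print(round(lagrange(2.5, xs, ys), 4))  # 0.5965
```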
18.
Differential equation
–
A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities and the derivatives represent their rates of change; because such relations are extremely common, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology. In pure mathematics, differential equations are studied from several different perspectives. Only the simplest differential equations are solvable by explicit formulas; however, if a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. Differential equations first came into existence with the invention of calculus by Newton and Leibniz. Jacob Bernoulli proposed the Bernoulli differential equation in 1695; this is a differential equation of the form y′ + P(x)y = Q(x)yⁿ, for which the following year Leibniz obtained solutions by simplifying it. Historically, the problem of a vibrating string, such as that of a musical instrument, was studied by Jean le Rond d'Alembert, Leonhard Euler, and Daniel Bernoulli. In 1746, d'Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation. The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a particle will fall to a fixed point in a fixed amount of time, independent of the starting point. Lagrange solved this problem in 1755 and sent the solution to Euler; both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. In 1822 Fourier published his work on heat flow in Théorie analytique de la chaleur. Contained in this book was Fourier's proposal of his heat equation for conductive diffusion of heat; this partial differential equation is now taught to every student of mathematical physics. For example, in classical mechanics, the motion of a body is described by its position and velocity as the time value varies. 
Newton's laws allow one to express these variables dynamically as a differential equation for the unknown position of the body as a function of time. In some cases, this differential equation may be solved explicitly. An example of modelling a real-world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance. The ball's acceleration towards the ground is the acceleration due to gravity minus the deceleration due to air resistance. Gravity is considered constant, and air resistance may be modelled as proportional to the ball's velocity; this means that the ball's acceleration, which is a derivative of its velocity, depends on the velocity. Finding the velocity as a function of time involves solving a differential equation. Differential equations can be divided into several types.
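A minimal sketch of the falling-ball model just described, dv/dt = g − k·v; the constants g and k below are illustrative choices, and the closed-form solution assumes v(0) = 0.

```python
import math

# Falling ball with air resistance proportional to velocity:
#   dv/dt = g - k*v,  v(0) = 0.
# Constants are illustrative, not taken from the source.

g, k = 9.81, 0.5

def v_exact(t):
    # Closed-form solution: v(t) = (g/k) * (1 - e^{-k t})
    return (g / k) * (1.0 - math.exp(-k * t))

# As t grows, the velocity approaches the terminal value g/k.
print(round(v_exact(20.0), 3))
print(round(g / k, 3))  # terminal velocity
```

One can check directly that the derivative of v_exact equals g − k·v, i.e. that it really solves the equation.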
19.
Numerical weather prediction
–
Numerical weather prediction uses mathematical models of the atmosphere and oceans to predict the weather based on current weather conditions. Though first attempted in the 1920s, it was not until the advent of computer simulation in the 1950s that numerical weather predictions produced realistic results. Manipulating the vast datasets and performing the calculations necessary for modern numerical weather prediction requires some of the most powerful supercomputers in the world. Even with the power of supercomputers, the forecast skill of numerical weather models extends to only about six days. Factors affecting the accuracy of numerical predictions include the density and quality of observations used as input to the forecasts, together with deficiencies in the numerical models themselves. Post-processing techniques such as model output statistics have been developed to improve the handling of errors in numerical predictions. A more fundamental problem lies in the chaotic nature of the partial differential equations that govern the atmosphere: it is impossible to solve these equations exactly, and small errors grow with time. Present understanding is that this chaotic behavior limits accurate forecasts to about 14 days even with perfectly accurate input data and a flawless model. Ensemble forecasting addresses this by analyzing multiple forecasts created with an individual forecast model or with multiple models. It was not until the advent of the computer and computer simulations that computation time was reduced to less than the forecast period itself. The ENIAC was used to create the first weather forecasts via computer in 1950; in 1954, Carl-Gustav Rossby's group at the Swedish Meteorological and Hydrological Institute used the same model to produce the first operational forecast. 
Operational numerical weather prediction in the United States began in 1955 under the Joint Numerical Weather Prediction Unit. In 1956, Norman Phillips developed a mathematical model which could realistically depict monthly and seasonal patterns in the troposphere; this became the first successful climate model. Following Phillips' work, several groups began working to create general circulation models. The first general circulation model that combined both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory. As computers have become more powerful, the size of the initial data sets has increased. These newer models include more physical processes in the simplifications of the equations of motion in numerical simulations of the atmosphere. In 1966, West Germany and the United States began producing operational forecasts based on primitive-equation models, followed by the United Kingdom in 1972 and Australia in 1977. The development of limited-area models facilitated advances in forecasting the tracks of tropical cyclones as well as air quality in the 1970s and 1980s. By the early 1980s, models began to include the interactions of soil and vegetation with the atmosphere, which led to more realistic forecasts. The output of forecast models based on atmospheric dynamics is unable to resolve some details of the weather near the Earth's surface. As such, a statistical relationship between the output of a numerical weather model and the ensuing conditions at the ground was developed in the 1970s and 1980s. The process of entering observation data into the model to generate initial conditions is called initialization.
20.
Partial differential equation
–
In mathematics, a partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. PDEs are used to formulate problems involving functions of several variables, and can be used to describe a wide variety of phenomena such as sound, heat, electrostatics, electrodynamics, fluid dynamics, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations. Partial differential equations are equations that involve rates of change with respect to continuous variables. The dynamics for a rigid body take place in a finite-dimensional configuration space; the dynamics for a fluid occur in an infinite-dimensional configuration space. This distinction usually makes PDEs much harder to solve than ordinary differential equations. Classic domains where PDEs are used include acoustics, fluid dynamics, and electrodynamics. A partial differential equation for the function u is an equation of the form f(x₁, …, xₙ; u, ∂u/∂x₁, …) = 0. If f is a linear function of u and its derivatives, the PDE is called linear. Common examples of linear PDEs include the heat equation, the wave equation, Laplace's equation, the Helmholtz equation, and the Klein–Gordon equation. A relatively simple PDE is ∂u/∂x = 0. This relation implies that the function u(x, y) is independent of x; however, the equation gives no information on the dependence on the variable y. Hence the general solution of this equation is u(x, y) = f(y), where f is an arbitrary function of y. The analogous ordinary differential equation is du/dx = 0, which has the solution u = c, where c is a constant; these two examples illustrate that general solutions of ordinary differential equations involve arbitrary constants, but solutions of PDEs involve arbitrary functions. A solution of a PDE is generally not unique; additional conditions must generally be specified on the boundary of the region where the solution is defined. 
For instance, in the example above, the function f can be determined if u is specified on the line x = 0. Even if the solution of a partial differential equation exists and is unique, it may nevertheless have undesirable properties; the mathematical study of such questions is usually in the more powerful context of weak solutions. The derivative of u with respect to y approaches 0 uniformly in x as n increases, but the solution approaches infinity if nx is not an integer multiple of π for any non-zero value of y.
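A quick numerical check of the simple PDE example ∂u/∂x = 0 discussed above: any u(x, y) = f(y) satisfies it, so its finite-difference x-derivative vanishes everywhere. The particular f chosen here is arbitrary.

```python
import math

# Verify numerically that u(x, y) = f(y) solves du/dx = 0.
# The choice of f below is arbitrary for this sketch.

def u(x, y):
    return math.sin(y) + y ** 2  # depends on y only

h = 1e-6
for x, y in [(0.0, 1.0), (2.0, -0.5), (-3.0, 4.0)]:
    du_dx = (u(x + h, y) - u(x - h, y)) / (2 * h)  # central difference
    print(abs(du_dx) < 1e-9)  # True at every sample point
```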
21.
Hedge fund
–
A hedge fund is an investment fund that pools capital from accredited or institutional investors and invests in a variety of assets. It is administered by a professional investment management firm, and often structured as a limited partnership, limited liability company, or similar vehicle. The name hedge fund originated from the paired long and short positions that the first of these funds used to hedge market risk. Over time, the types and nature of the hedging concepts expanded; today, hedge funds engage in a diverse range of markets and strategies and employ a wide variety of financial instruments and risk management techniques. Hedge funds are available only to certain accredited investors and cannot be offered or sold to the general public. Hedge funds have existed for decades and have become increasingly popular, growing to be a substantial fraction of asset management. Hedge funds are almost always open-ended and allow additions or withdrawals by their investors; the value of an investor's holding is directly related to the fund's net asset value. Many hedge fund investment strategies aim to achieve a positive return on investment regardless of whether markets are rising or falling. Hedge fund managers often invest money of their own in the fund they manage. A hedge fund typically pays its investment manager an annual management fee and a performance fee. Both co-investment and performance fees serve to align the interests of managers with those of the investors in the fund. Some hedge funds have several billion dollars of assets under management. The word "hedge", meaning a line of bushes around a field, has long been used as a metaphor for the placing of limits on risk. Early hedge funds sought to hedge specific investments against general market fluctuations by shorting the market. Nowadays, however, many different investment strategies are used, many of which do not hedge risk. During the US bull market of the 1920s, there were numerous private investment vehicles available to wealthy investors. The sociologist Alfred W. 
Jones is credited with coining the phrase "hedged fund" and with creating the first hedge fund structure in 1949, although this has been disputed. Jones referred to his fund as being "hedged", a term commonly used on Wall Street to describe the management of investment risk due to changes in the financial markets. In the 1970s, hedge funds specialized in a single strategy. Many hedge funds closed during the recession of 1969–70 and the 1973–1974 stock market crash due to heavy losses; they received renewed attention in the late 1980s. During the 1990s, the number of hedge funds increased significantly, funded with wealth created during the 1990s stock market rise. The increased interest was due to the aligned-interest compensation structure and the promise of high returns. Over the next decade hedge fund strategies expanded to include credit arbitrage, distressed debt, fixed income, quantitative, and multi-strategy approaches. US institutional investors such as pension and endowment funds began allocating greater portions of their portfolios to hedge funds.
22.
Operations research
–
Operations research, or operational research in British usage, is a discipline that deals with the application of advanced analytical methods to help make better decisions. Further, the term operational analysis is used in the British military as an intrinsic part of capability development, management and assurance. In particular, operational analysis forms part of Combined Operational Effectiveness and Investment Appraisals. Operations research is often considered to be a sub-field of applied mathematics; the terms management science and decision science are sometimes used as synonyms. Operations research is concerned with determining the maximum (of profit, performance, or yield) or minimum (of loss, risk, or cost) of some real-world objective. Originating in military efforts before World War II, its techniques have grown to concern problems in a variety of industries. Nearly all of these techniques involve the construction of mathematical models that attempt to describe the system. Because of the computational and statistical nature of most of these fields, OR also has strong ties to computer science. In the decades after the two world wars, the techniques were more widely applied to problems in business and industry. Early work in operational research was carried out by individuals such as Charles Babbage. Percy Bridgman brought operational research to bear on problems in physics in the 1920s. Modern operational research originated at the Bawdsey Research Station in the UK in 1937 and was the result of an initiative of the station's superintendent, A. P. Rowe. Rowe conceived the idea as a means to analyse and improve the working of the UK's early warning radar system, Chain Home (CH). Initially, he analysed the operating of the radar equipment and its communication networks, expanding later to include the operating personnel's behaviour. This revealed unappreciated limitations of the CH network and allowed remedial action to be taken. Scientists in the United Kingdom, including Patrick Blackett, Cecil Gordon, and Solly Zuckerman, contributed to the field; other names for it included operational analysis and quantitative management. 
During the Second World War close to 1,000 men and women in Britain were engaged in operational research; about 200 operational research scientists worked for the British Army. Patrick Blackett worked for several different organizations during the war. In 1941, after first working with RAF Coastal Command, Blackett moved from the RAE to the Navy. Blackett's team at Coastal Command's Operational Research Section included two future Nobel prize winners and many other people who went on to be pre-eminent in their fields. They undertook a number of analyses that aided the war effort. Convoys travel at the speed of the slowest member, so small convoys can travel faster; it was also argued that small convoys would be harder for German U-boats to detect.
23.
Actuary
–
An actuary is a business professional who deals with the measurement and management of risk and uncertainty; the name of the corresponding discipline is actuarial science. These risks can affect both sides of the balance sheet and require asset management, liability management, and valuation skills. Actuaries provide assessments of financial security systems, with a focus on their complexity and their mathematics. Actuaries of the 21st century require analytical skills, business knowledge, and an understanding of human behavior and information systems to design and manage programs that control risk. The actual steps needed to become an actuary are usually country-specific; however, almost all processes share a rigorous schooling or examination structure. The profession has consistently been ranked as one of the most desirable: in various studies, being an actuary was ranked number one or two several times since 2010. Actuaries use skills primarily in mathematics, particularly calculus-based probability and mathematical statistics, but also economics, computer science, finance, and business. Actuaries assemble and analyze data to estimate the probability and likely cost of the occurrence of an event such as death, sickness, injury, or disability. Most traditional actuarial disciplines fall into two main categories: life and non-life. Life actuaries, which include health and pension actuaries, primarily deal with mortality risk and morbidity risk. Products prominent in their work include life insurance, annuities, pensions, short and long term disability insurance, health insurance, health savings accounts, and long-term care insurance. Non-life actuaries are also known as property and casualty or general insurance actuaries. Actuaries are also called upon for their expertise in enterprise risk management. This can involve dynamic financial analysis, stress testing, and the formulation of corporate risk policy. Actuaries are also involved in other areas of the financial services industry. 
On both the life and casualty sides, the function of actuaries is to calculate premiums and reserves for insurance policies. On the casualty side, this often involves quantifying the probability of a loss event, called the frequency, and the size of that loss event, called the severity. The amount of time that occurs before the loss event is also important, as the insurer will not have to pay anything until after the event has occurred. On the life side, the analysis often involves quantifying how much a potential sum of money or a financial liability will be worth at different points in the future. Forecasting interest yields and currency movements also plays a role in determining future costs. Actuaries do not always attempt to predict aggregate future events; often, their work may relate to determining the cost of financial liabilities that have already occurred, called retrospective reinsurance.
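A sketch of the kind of time-value computation described above: what a future liability is worth today under an assumed constant annual interest rate. The rate and amount are invented for illustration.

```python
# Present value of a single future payment under a constant annual
# interest rate. Rate and amount are illustrative assumptions.

def present_value(amount, rate, years):
    return amount / (1.0 + rate) ** years

# A 1,000,000 payment due in 10 years, discounted at 4% per year,
# is worth roughly 675,000 today.
print(round(present_value(1_000_000, 0.04, 10), 2))
```

Real actuarial work layers survival probabilities and variable yield curves on top of this basic discounting step.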
24.
Linear interpolation
–
In mathematics, linear interpolation is a method of curve fitting using linear polynomials to construct new data points within the range of a discrete set of known data points. If the two known points are given by the coordinates (x₀, y₀) and (x₁, y₁), the linear interpolant is the straight line between these points. It is a special case of polynomial interpolation with n = 1. Solving the line equation for y, the unknown value at x, gives y = y₀ + (y₁ − y₀)(x − x₀)/(x₁ − x₀). Outside the interval [x₀, x₁], this formula is identical to linear extrapolation. The formula can also be understood as a weighted average: the weights are inversely related to the distance from the end points to the unknown point, and the closer point has more influence than the farther point. Thus, the weights are (x − x₀)/(x₁ − x₀) and (x₁ − x)/(x₁ − x₀), which are normalized distances between the unknown point and each of the end points. Because these sum to 1, y = y₀(x₁ − x)/(x₁ − x₀) + y₁(x − x₀)/(x₁ − x₀). Linear interpolation on a set of data points is defined as the concatenation of linear interpolants between each pair of data points. This results in a continuous curve with a discontinuous derivative, thus of differentiability class C⁰. Linear interpolation is used to approximate a value of some function f using two known values of that function at other points. The error of this approximation is defined as R = f − p, where p denotes the linear interpolant. It can be proven using Rolle's theorem that if f has a continuous second derivative, the error is bounded by a constant times the square of the spacing (x₁ − x₀). That is, the approximation between two points on a function gets worse with the second derivative of the function that is approximated. This is intuitively correct as well: the curvier the function is, the worse the approximations made with simple linear interpolation become. Linear interpolation is often used to fill the gaps in a table. Suppose that one has a table listing the population of a country in 1970, 1980, 1990 and 2000. Linear interpolation is an easy way to estimate the population in an intermediate year. The basic operation of linear interpolation between two values is commonly used in computer graphics; in that field's jargon it is sometimes called a lerp.
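The weighted-average form above translates directly into code; the population figures used in the example are invented for illustration.

```python
# Linear interpolation ("lerp") in the weighted-average form:
#   y = y0*(x1 - x)/(x1 - x0) + y1*(x - x0)/(x1 - x0)

def lerp(x, x0, y0, x1, y1):
    t = (x - x0) / (x1 - x0)  # normalized distance; the weight on y1
    return y0 * (1.0 - t) + y1 * t

# Table-gap example: populations (in millions, invented) at 1990 and 2000
# give an estimate for 1994.
print(lerp(1994, 1990, 50.0, 2000, 60.0))  # 54.0
```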
25.
Newton's method
–
In numerical analysis, Newton's method, named after Isaac Newton and Joseph Raphson, is a method for finding successively better approximations to the roots of a real-valued function. The process starts with an initial guess x₀; if the function satisfies the assumptions made in the derivation of the formula, a better approximation x₁ is obtained. Geometrically, (x₁, 0) is the intersection of the x-axis and the tangent to the graph of f at (x₀, f(x₀)). The process is repeated as xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ) until a sufficiently accurate value is reached. This algorithm is first in the class of Householder's methods, succeeded by Halley's method; the method can also be extended to complex functions and to systems of equations. Each x-intercept will typically be a better approximation to the function's root than the original guess. Suppose f : [a, b] → ℝ is a differentiable function defined on the interval [a, b] with values in the real numbers ℝ. The formula for converging on the root can be easily derived. Suppose we have some current approximation xₙ; then we can derive the formula for a better approximation, xₙ₊₁, by referring to the diagram on the right. The equation of the tangent line to the curve y = f(x) at the point x = xₙ is y = f′(xₙ)(x − xₙ) + f(xₙ). The x-intercept of this line is then used as the next approximation to the root, xₙ₊₁. In other words, setting y to zero and x to xₙ₊₁ gives 0 = f′(xₙ)(xₙ₊₁ − xₙ) + f(xₙ), and solving for xₙ₊₁ gives xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ). We start the process off with some arbitrary initial value x₀. The method will usually converge, provided this initial guess is close enough to the unknown zero and f′(x₀) ≠ 0; more details can be found in the section below. Householder's methods are similar but have higher order for even faster convergence. However, Newton's own method differs substantially from the modern method given above: Newton applied the method only to polynomials. He did not compute the successive approximations xₙ, but computed a sequence of polynomials. Finally, Newton viewed the method as purely algebraic and made no mention of the connection with calculus. 
Newton may have derived his method from a similar but less precise method by Vieta. A special case of Newton's method for calculating square roots was known much earlier and is often called the Babylonian method. Newton's method was used by 17th-century Japanese mathematician Seki Kōwa to solve single-variable equations. Newton's method was first published in 1685 in A Treatise of Algebra both Historical and Practical by John Wallis. In 1690, Joseph Raphson published a simplified description in Analysis aequationum universalis. Finally, in 1740, Thomas Simpson described Newton's method as an iterative method for solving general nonlinear equations using calculus.
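The iteration x_{n+1} = x_n − f(x_n)/f′(x_n) described above can be sketched as follows; the stopping tolerance and iteration cap are arbitrary choices for illustration, not part of the method itself:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Approximate a root of f by Newton's iteration starting from x0."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:      # stop once f(x) is effectively zero
            break
        x = x - fx / fprime(x)  # x-intercept of the tangent at x
    return x

# The Babylonian square-root method is the special case f(x) = x**2 - a:
root = newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0)
```

Starting from x0 = 1, the iterates 3/2, 17/12, 577/408, … double the number of correct digits at each step, matching the quadratic convergence noted in the text.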
26.
Lagrange polynomial
–
In numerical analysis, Lagrange polynomials are used for polynomial interpolation: for a given set of distinct points x_j and numbers y_j, the Lagrange polynomial is the polynomial of lowest degree that assumes the value y_j at each x_j. Although named after Joseph-Louis Lagrange, who published it in 1795, it is also an easy consequence of a formula published in 1783 by Leonhard Euler. Uses of Lagrange polynomials include the Newton–Cotes method of numerical integration. Lagrange interpolation is susceptible to Runge's phenomenon of large oscillation, and changing the points x_j requires recalculating the entire interpolant. The basis polynomials are defined as ℓ_j(x) = ∏_{m=0, m≠j}^{k} (x − x_m)/(x_j − x_m). Note how, given the initial assumption that no two x_i are the same, x_j − x_m ≠ 0, so this expression is always well-defined. (Points with x_i = x_j and y_i = y_j would actually be one single point.) We consider what happens when this product is evaluated at a node x_i. Because the product skips m = j, if i = j then all factors are (x_j − x_m)/(x_j − x_m) = 1; if i ≠ j, the factor with m = i is (x_i − x_i)/(x_j − x_i) = 0 and the whole product vanishes. So ℓ_j(x_i) = δ_{ji}, which is 1 if j = i and 0 otherwise. It follows that y_i ℓ_i(x_i) = y_i, so at each point x_i, L(x_i) = y_i + 0 + 0 + ⋯ + 0 = y_i, showing that L interpolates the function exactly: L(x_i) = Σ_{j=0}^{k} y_j ℓ_j(x_i) = Σ_{j=0}^{k} y_j δ_{ji} = y_i. Thus the function L(x) is a polynomial with degree at most k. Additionally, the interpolating polynomial is unique, as shown by the unisolvence theorem at the polynomial interpolation article. Solving an interpolation problem leads to a problem in linear algebra amounting to inversion of a matrix: using a standard monomial basis for our interpolation polynomial, L(x) = Σ_{j=0}^{k} m_j x^j, we must invert a Vandermonde matrix to solve for the coefficients m_j. This construction is analogous to the Chinese Remainder Theorem: instead of checking for remainders of integers modulo prime numbers, we are checking for remainders of polynomials when divided by linear factors. As an example, we may wish to interpolate f(x) = x² over the range 1 ≤ x ≤ 3. The Lagrange form of the interpolation polynomial shows the linear character of polynomial interpolation and the uniqueness of the interpolation polynomial.
Therefore, it is preferred in proofs and theoretical arguments. Uniqueness can also be seen from the invertibility of the Vandermonde matrix, due to the non-vanishing of the Vandermonde determinant. But, as can be seen from the construction, each time a node x_k changes, all basis polynomials have to be recalculated. A better form of the interpolation polynomial for practical purposes is the barycentric form of the Lagrange interpolation or Newton polynomials. Lagrange and other interpolation at equally spaced points, as in the example above, yield a polynomial oscillating above and below the true function.
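A direct (if numerically naive) evaluation of L(x) = Σ_j y_j ℓ_j(x) from the basis-polynomial definition might look like this sketch:

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolation polynomial at x.
    xs must be distinct nodes; ys are the values to interpolate."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        # Basis polynomial l_j(x): product over all nodes m != j.
        lj = 1.0
        for m, xm in enumerate(xs):
            if m != j:
                lj *= (x - xm) / (xj - xm)
        total += yj * lj
    return total

# Interpolating f(x) = x**2 at the nodes 1, 2, 3 (as in the example
# above) reproduces x**2 exactly, since a degree-2 polynomial through
# three points is unique.
```

As the text notes, this form requires O(k²) work per evaluation and recomputes everything when a node changes; the barycentric form avoids both costs.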
27.
Euler method
–
In mathematics and computational science, the Euler method is a first-order numerical procedure for solving ordinary differential equations with a given initial value. It is the most basic explicit method for numerical integration of ordinary differential equations and is the simplest Runge–Kutta method. The Euler method is named after Leonhard Euler, who treated it in his book Institutionum calculi integralis. The Euler method is a first-order method, which means that the local error (error per step) is proportional to the square of the step size, and the global error is proportional to the step size. The Euler method often serves as the basis for more complex methods. Consider the problem of calculating the shape of an unknown curve which starts at a given point and satisfies a given differential equation. The idea is that while the curve is initially unknown, its starting point, which we denote A0, is known. Then, from the differential equation, the slope of the curve at A0 can be computed, and so the tangent line. Take a small step along that tangent line up to a point A1. Along this small step, the slope does not change too much, so A1 will be close to the curve. If we pretend that A1 is still on the curve, the same reasoning can be repeated; after several steps, a polygonal curve A0 A1 A2 A3 … is computed. Choose a value h for the size of every step and set t_n = t_0 + nh. Now, one step of the Euler method from t_n to t_{n+1} = t_n + h is y_{n+1} = y_n + h·f(t_n, y_n). The value of y_n is an approximation of the solution to the ODE at time t_n: y_n ≈ y(t_n). The Euler method is explicit, i.e. the solution y_{n+1} is an explicit function of y_i for i ≤ n. Given the initial value problem y′ = y, y(0) = 1, one step of the Euler method is y_{n+1} = y_n + h·f(t_n, y_n), so first we must compute f(t_0, y_0). In this simple differential equation, the function f is defined by f(t, y) = y, so f(t_0, y_0) = 1. By doing the above step, we have found the slope of the line that is tangent to the solution curve at the point (0, 1). Recall that the slope is defined as the change in y divided by the change in t. The next step is to multiply the above value by the step size h, which we take equal to one here: h·f(y_0) = 1·1 = 1.
Since the step size is the change in t, when we multiply the step size by the slope of the tangent we obtain a change in the y value. This value is then added to the initial y value to obtain the next value to be used for computations: y_1 = y_0 + h·f(y_0) = 1 + 1·1 = 2. The above steps should be repeated to find y_2, y_3 and y_4.
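The repeated stepping described above is easy to automate. A minimal sketch for y′ = f(t, y) with a fixed step h:

```python
def euler(f, t0, y0, h, steps):
    """Euler's method: y_{n+1} = y_n + h * f(t_n, y_n)."""
    t, y = t0, y0
    history = [y0]
    for _ in range(steps):
        y = y + h * f(t, y)  # step along the tangent line
        t = t + h
        history.append(y)
    return history

# y' = y, y(0) = 1 with h = 1, as in the worked example above:
# the approximation doubles at every step.
ys = euler(lambda t, y: y, 0.0, 1.0, 1.0, 4)  # [1.0, 2.0, 4.0, 8.0, 16.0]
```

The exact solution is y(t) = e^t, so y(4) ≈ 54.6; the large gap to 16 illustrates why such a coarse step size is only a pedagogical choice.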
28.
NIST
–
The National Institute of Standards and Technology (NIST) is a measurement standards laboratory, and a non-regulatory agency of the United States Department of Commerce. Its mission is to promote innovation and industrial competitiveness. In 1821, John Quincy Adams had declared that "weights and measures may be ranked among the necessities of life to every individual of human society." From 1830 until 1901, the role of overseeing weights and measures was carried out by the Office of Standard Weights and Measures. President Theodore Roosevelt appointed Samuel W. Stratton as the first director. The budget for the first year of operation was $40,000. A laboratory site was constructed in Washington, DC, and instruments were acquired from the national physical laboratories of Europe. In addition to weights and measures, the Bureau developed instruments for electrical units. In 1905 a meeting was called that would be the first National Conference on Weights and Measures. Quality standards were developed for products including some types of clothing, automobile brake systems and headlamps, and antifreeze. During World War I, the Bureau worked on multiple problems related to war production, even operating its own facility to produce optical glass when European supplies were cut off. Between the wars, Harry Diamond of the Bureau developed a blind-approach radio aircraft landing system. In 1948, financed by the Air Force, the Bureau began design and construction of SEAC, the Standards Eastern Automatic Computer. The computer went into operation in May 1950 using a combination of vacuum tubes and solid-state diode logic. About the same time the Standards Western Automatic Computer (SWAC) was built at the Los Angeles office of the NBS and used for research there. A mobile version, DYSEAC, was built for the Signal Corps in 1954. Due to a changing mission, the National Bureau of Standards became the National Institute of Standards and Technology in 1988.
Following 9/11, NIST conducted the official investigation into the collapse of the World Trade Center buildings. NIST had a budget for fiscal year 2007 of about $843.3 million. NIST's 2009 budget was $992 million, and it also received $610 million as part of the American Recovery and Reinvestment Act. NIST employs about 2,900 scientists, engineers, technicians, and support and administrative personnel. About 1,800 NIST associates complement the staff; in addition, NIST partners with 1,400 manufacturing specialists and staff at nearly 350 affiliated centers around the country. NIST publishes Handbook 44, which provides the specifications, tolerances, and other technical requirements for weighing and measuring devices. The Congress of 1866 made use of the metric system in commerce a legally protected activity through the passage of the Metric Act of 1866. NIST is headquartered in Gaithersburg, Maryland, and operates a facility in Boulder, Colorado. NIST's activities are organized into laboratory programs and extramural programs. Effective October 1, 2010, NIST was realigned by reducing the number of NIST laboratory units from ten to six. NIST's Boulder laboratories are best known for NIST-F1, the institute's atomic clock, which serves as the source of the nation's official time. NIST also operates a neutron science user facility, the NIST Center for Neutron Research (NCNR). The NCNR provides scientists access to a variety of neutron scattering instruments, which they use in many research fields.
29.
Abramowitz and Stegun
–
Its full title is Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. The notation used in the Handbook is the de facto standard for much of applied mathematics today. At the time of its publication, the Handbook was an essential resource for practitioners; nowadays, computer algebra systems have replaced the function tables, but the Handbook remains an important reference source. The foreword discusses a meeting in 1954 in which it was agreed that the advent of high-speed computing equipment changed the task of table making but definitely did not remove the need for tables. Because the Handbook is the work of U.S. federal government employees acting in their official capacity, it is not protected by copyright in the United States. While there was only one edition of the work, it went through many print runs, including a growing number of corrections. The ninth reprint edition by Dover Publications incorporates additional corrections on pages 18, 79, 80, 82, 408, 450, 786, 825 and 934. As a side note, the Dover paperback edition cover names the second editor "Irene A. Segun" instead of Stegun; this error is sometimes used to illustrate the human trait of looking in every place except the most obvious one. Some errata remain unresolved. Michael Danos and Johann Rafelski edited the Pocketbook of Mathematical Functions, an abridged version from which the references were removed. Most known errata were incorporated, the physical constants were updated, and the now-first chapter saw some slight enlargement compared to the original second chapter. The numbering of formulas was kept for easier cross-reference. References: Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. New York: United States Department of Commerce, National Bureau of Standards. Boisvert, Ronald F.; Lozier, Daniel W. "Handbook of Mathematical Functions", in A Century of Excellence in Measurements Standards and Technology - A Chronicle of Selected NBS/NIST Publications 1901-2000. USA: U.S. Department of Commerce, National Institute of Standards and Technology / CRC Press.
Scanned versions of the book are available online, including a high-quality scan in PDF and TIFF formats hosted at the University of Birmingham, UK, and another scanned version by ConvertIt.com. The NIST Digital Library of Mathematical Functions is the digital companion to the Handbook.
30.
Mechanical calculator
–
A mechanical calculator, or calculating machine, was a mechanical device used to perform automatically the basic operations of arithmetic. Most mechanical calculators were comparable in size to small desktop computers and have been rendered obsolete by the advent of the electronic calculator. Surviving notes from Wilhelm Schickard in 1623 reveal that he designed and had built the earliest of the modern attempts at mechanizing calculation. A study of the surviving notes shows a machine that would have jammed after a few entries on the same dial. Schickard abandoned his project in 1624 and never mentioned it again before his death in 1635. Two decades after Schickard's supposedly failed attempt, in 1642, Blaise Pascal decisively solved these problems with his invention of the mechanical calculator. Co-opted into his father's labour as tax collector in Rouen, Pascal designed the calculator to help with the large amount of tedious arithmetic required. It was called Pascal's Calculator or Pascaline. For forty years the arithmometer was the only type of mechanical calculator available for sale. The comptometer, introduced in 1887, was the first machine to use a keyboard, which consisted of columns of nine keys for each digit. The Dalton adding machine, manufactured from 1902, was the first to have a 10-key keyboard. Electric motors were used on some mechanical calculators from 1901. The production of mechanical calculators came to a stop in the middle of the 1970s, closing an industry that had lasted for 120 years. Babbage's first design was a mechanical calculator, his difference engine. In 1855, Georg Scheutz became the first of a handful of designers to succeed at building a smaller and simpler model of his difference engine. A crucial step was the adoption of a punched card system derived from the Jacquard loom, making it infinitely programmable.
The desire to economize time and mental effort in arithmetical computations led first to the abacus; this instrument was probably invented by the Semitic peoples and later adopted in India, whence it spread westward throughout Europe and eastward to China and Japan. After the development of the abacus, no further advances were made until John Napier devised his numbering rods, or Napier's Bones, in 1617. The 17th century marked the beginning of the history of mechanical calculators, as it saw the invention of its first machines, including Pascal's calculator. Blaise Pascal invented a mechanical calculator with a sophisticated carry mechanism in 1642. After three years of effort and 50 prototypes he introduced his calculator to the public, and he built twenty of these machines in the following ten years. This machine could add and subtract two numbers directly and multiply and divide by repetition; this suggests that the carry mechanism would have proved itself in practice many times over. Pascal's invention of the calculating machine, just three hundred years ago, was made while he was a youth of nineteen.
31.
Bisection method
–
The bisection method in mathematics is a root-finding method that repeatedly bisects an interval and then selects a subinterval in which a root must lie for further processing. It is a simple and robust method, but it is also relatively slow. Because of this, it is often used to obtain a rough approximation to a solution which is then used as a starting point for more rapidly converging methods. The method is also called the interval halving method, the binary search method, or the dichotomy method. The method is applicable for numerically solving the equation f(x) = 0 for the real variable x, where f is a continuous function defined on an interval [a, b] with f(a) and f(b) of opposite signs. In this case a and b are said to bracket a root since, by the intermediate value theorem, f must have at least one root in the interval. At each step the method divides the interval in two by computing the midpoint c = (a + b)/2 of the interval and the value of the function f(c) at that point. Unless c is itself a root, there are now two possibilities: either f(a) and f(c) have opposite signs and bracket a root, or f(c) and f(b) have opposite signs and bracket a root. The method selects the subinterval that is guaranteed to be a bracket as the new interval to be used in the next step. In this way an interval that contains a zero of f is reduced in width by 50% at each step. The process is continued until the interval is sufficiently small. Explicitly, if f(a) and f(c) have opposite signs, then the method sets c as the new value for b, and if f(b) and f(c) have opposite signs then the method sets c as the new a. In both cases, the new f(a) and f(b) have opposite signs, so the method is applicable to this smaller interval. The input for the method is a continuous function f, an interval [a, b], and the function values f(a) and f(b), which must be of opposite sign. Each iteration performs these steps: calculate c, the midpoint of the interval, c = (a + b)/2; calculate the function value at the midpoint, f(c); if convergence is satisfactory, return c and stop iterating; otherwise, examine the sign of f(c) and replace either (a, f(a)) or (b, f(b)) with (c, f(c)) so that there is a zero crossing within the new interval.
When implementing the method on a computer, there can be problems with finite precision: although f is continuous, finite precision may preclude a function value ever being exactly zero. For f(x) = x − π, there will never be a finite representation of x that gives zero. Floating-point representations also have limited precision, so at some point the midpoint of [a, b] will be either a or b itself. First, two numbers a and b have to be found such that f(a) and f(b) have opposite signs. For the function considered above, a = 1 and b = 2 satisfy this criterion; because the function is continuous, there must be a root within the interval [1, 2]. The first midpoint is c = 1.5, and because f(1.5) is negative, a = 1 is replaced with a = 1.5 for the next iteration to ensure that f(a) and f(b) have opposite signs.
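The loop described above can be sketched as follows. The example function x³ − x − 2 is a hypothetical choice consistent with the bracket [1, 2] and the sign behaviour discussed in the text, since the original example function is not reproduced here:

```python
def bisect(f, a, b, tol=1e-12, max_iter=200):
    """Find a root of f in [a, b], assuming f(a) and f(b) differ in sign."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2          # midpoint of the bracket
        fc = f(c)
        if fc == 0.0 or (b - a) / 2 < tol:
            return c
        if fa * fc < 0:          # root lies in [a, c]
            b, fb = c, fc
        else:                    # root lies in [c, b]
            a, fa = c, fc
    return (a + b) / 2

root = bisect(lambda x: x**3 - x - 2, 1.0, 2.0)  # ~1.5213797
```

Halving the bracket each step means roughly one extra binary digit of accuracy per iteration, which is why the method is robust but slow compared with Newton's method.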
32.
Riemann sum
–
Specifically, the interval [a, b] over which the function is to be integrated is divided into N equal subintervals of length h = (b − a)/N. The rectangles are then drawn so that either their left or right corners, or the middle of their top edge, lies on the graph of the function; the formula for x_n above gives x_n for the top-left corner approximation. As N gets larger, this approximation gets more accurate; note that this holds regardless of which sample point within each subinterval is used. For a function f which is twice differentiable, the approximation error in each section of the midpoint rule decays as the cube of the width of the rectangle: |E_i| ≤ (Δ³/24)·|f″(ξ)| for some ξ in that subinterval. See also: midpoint method for solving ordinary differential equations, trapezoidal rule, Simpson's rule.
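The midpoint variant described above can be sketched in a few lines, here integrating x² over [0, 1], whose exact value is 1/3:

```python
def midpoint_sum(f, a, b, n):
    """Midpoint Riemann sum of f over [a, b] with n equal subintervals."""
    h = (b - a) / n
    # Sample f at the middle of each subinterval's top edge.
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

approx = midpoint_sum(lambda x: x * x, 0.0, 1.0, 100)
# The per-interval error decays like h**3, so summing N = (b-a)/h
# intervals gives a total error that decays like h**2.
```

Doubling n should therefore reduce the error by roughly a factor of four, which is easy to verify numerically.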
33.
Integral
–
In mathematics, an integral assigns numbers to functions in a way that can describe displacement, area, volume, and other concepts that arise by combining infinitesimal data. Integration is one of the two main operations of calculus, with its inverse, differentiation, being the other. The area above the x-axis adds to the total and that below the x-axis subtracts from the total. Roughly speaking, the operation of integration is the reverse of differentiation; for this reason, the term integral may also refer to the related notion of the antiderivative. In this case, it is called an indefinite integral. The integrals discussed in this article are those termed definite integrals. A rigorous mathematical definition of the integral was given by Bernhard Riemann. It is based on a limiting procedure which approximates the area of a curvilinear region by breaking the region into thin vertical slabs. A line integral is defined for functions of two or three variables, and the interval of integration is replaced by a curve connecting two points on the plane or in space. In a surface integral, the curve is replaced by a piece of a surface in three-dimensional space. The method of exhaustion was further developed and employed by Archimedes in the 3rd century BC and used to calculate areas for parabolas and an approximation to the area of a circle. A similar method was developed in China around the 3rd century AD by Liu Hui. This method was later used in the 5th century by the Chinese father-and-son mathematicians Zu Chongzhi and Zu Geng. The next significant advances in integral calculus did not begin to appear until the 17th century. Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation. Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers.
The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Newton and Leibniz. The theorem demonstrates a connection between integration and differentiation, and this connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Newton and Leibniz developed.
34.
QR decomposition
–
Any real square matrix A may be decomposed as A = QR, where Q is an orthogonal matrix and R is an upper triangular matrix. If A is invertible, then the factorization is unique if we require that the diagonal elements of R be positive. If instead A is a complex square matrix, then there is a decomposition A = QR where Q is a unitary matrix. If A has n linearly independent columns, then the first n columns of Q form an orthonormal basis for the column space of A. More generally, the first k columns of Q form an orthonormal basis for the span of the first k columns of A for any 1 ≤ k ≤ n. The fact that any column k of A only depends on the first k columns of Q is responsible for the triangular form of R. If A is of full rank n and we require that the diagonal elements of R1 are positive, then R1 and Q1 are unique. R1 is then equal to the upper triangular factor of the Cholesky decomposition of A* A. Analogously, we can define QL, RQ, and LQ decompositions, with L being a lower triangular matrix. There are several methods for computing the QR decomposition, such as the Gram–Schmidt process, Householder transformations, or Givens rotations. Each has a number of advantages and disadvantages. Consider the Gram–Schmidt process applied to the columns of the full column rank matrix A, with inner product ⟨v, w⟩ = vᵀw. The result can be written in matrix form as A = QR. Recall that an orthonormal matrix Q has the property QᵀQ = I; then we can calculate Q by means of Gram–Schmidt, and since QᵀA = QᵀQR = R, we obtain R = QᵀA. The RQ decomposition transforms a matrix A into the product of an upper triangular matrix R and an orthogonal matrix Q. The only difference from the QR decomposition is the order of these matrices: QR decomposition is Gram–Schmidt orthogonalization of the columns of A, started from the first column; RQ decomposition is Gram–Schmidt orthogonalization of the rows of A, started from the last row. The Gram–Schmidt process is inherently numerically unstable.
While the application of the projections has an appealing geometric analogy to orthogonalization, the orthogonalization itself is prone to numerical error. A significant advantage, however, is the ease of implementation, which makes this a useful algorithm for prototyping if a pre-built linear algebra library is unavailable. A Householder reflection (or Householder transformation) is a transformation that takes a vector and reflects it about some plane or hyperplane.
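A minimal, library-free sketch of the Gram–Schmidt construction of Q and R follows; note that this is the classical, numerically unstable variant discussed above, suitable only for prototyping:

```python
def gram_schmidt_qr(cols):
    """QR via classical Gram-Schmidt. `cols` holds the columns of A as
    plain lists; returns (q_cols, R) with orthonormal columns q_cols
    and upper triangular R (nested lists), so that A = QR."""
    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    n = len(cols)
    q_cols = []
    R = [[0.0] * n for _ in range(n)]
    for j, a in enumerate(cols):
        v = list(a)
        for i, q in enumerate(q_cols):
            R[i][j] = dot(q, a)                       # projection coefficient
            v = [vk - R[i][j] * qk for vk, qk in zip(v, q)]
        R[j][j] = dot(v, v) ** 0.5                    # norm of the residual
        q_cols.append([vk / R[j][j] for vk in v])
    return q_cols, R

Q, R = gram_schmidt_qr([[3.0, 4.0], [1.0, 2.0]])  # A has columns (3,4), (1,2)
```

The entries R[i][j] are exactly the projection coefficients ⟨q_i, a_j⟩, which is the identity R = QᵀA derived above.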
35.
System of linear equations
–
In mathematics, a system of linear equations is a collection of two or more linear equations involving the same set of variables. A solution to a system is an assignment of values to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by x = 1, y = −2, z = −2, since it makes all three equations valid. The word system indicates that the equations are to be considered collectively. In mathematics, the theory of linear systems is the basis and a fundamental part of linear algebra, a subject which is used in most parts of modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in engineering, physics, chemistry, and computer science. A system of non-linear equations can often be approximated by a linear system. For solutions in an integral domain like the ring of the integers, or in other algebraic structures, other theories have been developed. Integer linear programming is a collection of methods for finding the best integer solution; Gröbner basis theory provides algorithms when coefficients and unknowns are polynomials. Also tropical geometry is an example of linear algebra in a more exotic structure. The simplest kind of linear system involves two equations and two variables: 2x + 3y = 6 and 4x + 9y = 15. One method for solving such a system is as follows. First, solve the top equation for x in terms of y: x = 3 − (3/2)y. Now substitute this expression for x into the bottom equation: 4(3 − (3/2)y) + 9y = 15. This results in a single equation involving only the variable y. Solving gives y = 1, and substituting this back into the equation for x yields x = 3/2. In general, here x_1, x_2, …, x_n are the unknowns, a_11, a_12, …, a_mn are the coefficients of the system, and b_1, b_2, …, b_m are the constant terms. Often the coefficients and unknowns are real or complex numbers, but integers and rational numbers are also seen, as are polynomials.
One extremely helpful view is that each unknown is a weight for a column vector in a linear combination: x_1 a_1 + x_2 a_2 + ⋯ + x_n a_n = b. This allows all the language and theory of vector spaces to be brought to bear. If every vector within that span has exactly one expression as a linear combination of the given left-hand vectors, then any solution is unique. This is important because if we have m independent vectors a solution is guaranteed regardless of the right-hand side, and otherwise not guaranteed.
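The substitution steps for the 2×2 example above can be checked mechanically; using exact rational arithmetic avoids any rounding questions:

```python
from fractions import Fraction

# System from the text: 2x + 3y = 6 and 4x + 9y = 15.
# Substituting x = 3 - (3/2) y into the second equation gives
# 12 - 6y + 9y = 15, i.e. 3y = 3.
y = Fraction(15 - 12, 3)        # y = 1
x = 3 - Fraction(3, 2) * y      # x = 3/2

# Verify both original equations hold exactly.
assert 2 * x + 3 * y == 6
assert 4 * x + 9 * y == 15
```

The same solution is of course produced by Gaussian elimination or any general linear solver; substitution is practical only for very small systems.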
36.
Simplex algorithm
–
In mathematical optimization, Dantzig's simplex algorithm (or simplex method) is a popular algorithm for linear programming. The name of the algorithm is derived from the concept of a simplex and was suggested by T. S. Motzkin. Simplices are not actually used in the method, but one interpretation of it is that it operates on simplicial cones; the simplicial cones in question are the corners of a geometric object called a polytope. The shape of this polytope is defined by the constraints applied to the objective function. There is a straightforward process to convert any linear program into one in standard form, so this results in no loss of generality. In geometric terms, the feasible region defined by all values of x such that Ax ≤ b, x_i ≥ 0 is a (possibly unbounded) convex polytope. In this context an extreme point of this polytope is known as a basic feasible solution. It can be shown that for a linear program in standard form, if the objective function has a maximum value on the feasible region, then it has this value on (at least) one of the extreme points. The simplex algorithm applies this insight by walking along edges of the polytope to extreme points with greater and greater objective values; this continues until the maximum value is reached or an unbounded edge is visited, concluding that the problem has no bounded solution. The solution of a linear program is accomplished in two steps. In the first step, known as Phase I, a starting extreme point is found. Depending on the nature of the program this may be trivial. The possible results of Phase I are either that a basic feasible solution is found or that the feasible region is empty; in the latter case the linear program is called infeasible. In the second step, Phase II, the simplex algorithm is applied using the basic feasible solution found in Phase I as a starting point. The possible results from Phase II are either an optimum basic feasible solution or an infinite edge on which the objective function is unbounded. George Dantzig worked on planning methods for the US Army Air Force during World War II using a desk calculator. During 1946 his colleague challenged him to mechanize the planning process in order to entice him into not taking another job.
Dantzig formulated the problem as linear inequalities inspired by the work of Wassily Leontief. Dantzig's core insight, however, was to realize that most such planning ground rules can be translated into a linear objective function that needs to be maximized. Development of the simplex method was evolutionary and happened over a period of about a year.
37.
Linear programming
–
Linear programming is a method to achieve the best outcome in a mathematical model whose requirements are represented by linear relationships. Linear programming is a special case of mathematical programming (mathematical optimization). More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Its feasible region is a convex polytope, which is a set defined as the intersection of finitely many half spaces. Its objective function is a real-valued affine function defined on this polyhedron. A linear programming algorithm finds a point in the polyhedron where this function has the smallest (or largest) value, if such a point exists. The expression to be maximized or minimized is called the objective function. The inequalities Ax ≤ b and x ≥ 0 are the constraints which specify a convex polytope over which the objective function is to be optimized. In this context, two vectors are comparable when they have the same dimensions; if every entry in the first is less-than or equal-to the corresponding entry in the second, then we can say the first vector is less-than or equal-to the second vector. Linear programming can be applied to various fields of study. It is widely used in business and economics, and is also utilized for some engineering problems. Industries that use linear programming models include transportation, energy, and telecommunications, and it has proved useful in modeling diverse types of problems in planning, routing, scheduling, assignment, and design. The first linear programming formulation of a problem that is equivalent to the general linear programming problem was given by Leonid Kantorovich in 1939. He developed it during World War II as a way to plan expenditures and returns so as to reduce costs to the army. About the same time as Kantorovich, the Dutch-American economist T. C. Koopmans formulated classical economic problems as linear programs; Kantorovich and Koopmans later shared the 1975 Nobel prize in economics.
Dantzig independently developed a general linear programming formulation to use for planning problems in the US Air Force. In 1947, Dantzig also invented the simplex method, which for the first time efficiently tackled the linear programming problem in most cases. Dantzig provided formal proof in an unpublished report, A Theorem on Linear Inequalities, on January 5, 1948. Postwar, many industries found its use in their daily planning. Dantzig's original example was to find the best assignment of 70 people to 70 jobs. The computing power required to test all the permutations to select the best assignment is vast; the number of possible configurations exceeds the number of particles in the observable universe. However, it takes only a moment to find the optimum solution by posing the problem as a linear program and applying the simplex algorithm.
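Because a linear objective attains its optimum at an extreme point of the feasible polytope, a tiny two-variable problem can even be solved by brute-force vertex enumeration. The following sketch uses a made-up toy problem (maximize 3x + 2y subject to x + y ≤ 4, x + 3y ≤ 6, x ≥ 0, y ≥ 0), not an example from the text, and is obviously no substitute for the simplex method at realistic scale:

```python
from itertools import combinations

# Constraints in the form a*x + b*y <= c.
constraints = [(1, 1, 4), (1, 3, 6), (-1, 0, 0), (0, -1, 0)]

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y <= c + eps for a, b, c in constraints)

# Enumerate intersections of pairs of constraint boundaries (candidate
# vertices) and keep the feasible one with the largest objective value.
best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue  # parallel boundaries, no unique intersection
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        value = 3 * x + 2 * y
        if best is None or value > best[0]:
            best = (value, x, y)
# best == (12.0, 4.0, 0.0): the maximum 12 is attained at the vertex (4, 0).
```

This checks at most C(m, 2) candidate points; the simplex algorithm instead walks only along improving edges, which is what makes it practical for problems like Dantzig's 70-by-70 assignment example.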
38.
Floating-point arithmetic
–
In computing, floating-point arithmetic is arithmetic using a formulaic representation of real numbers as an approximation so as to support a trade-off between range and precision. A number is, in general, represented approximately to a fixed number of significant digits and scaled using an exponent in some fixed base; for example, 1.2345 = 12345 × 10⁻⁴, with 12345 the significand, 10 the base, and −4 the exponent. The term floating point refers to the fact that a number's radix point can "float": it can be placed anywhere relative to the significant digits of the number. This position is indicated as the exponent component, and thus the floating-point representation can be thought of as a kind of scientific notation. A consequence of this dynamic range is that the numbers that can be represented are not uniformly spaced. Over the years, a variety of floating-point representations have been used in computers; however, since the 1990s, the most commonly encountered representation is that defined by the IEEE 754 Standard. A floating-point unit is a part of a computer system specially designed to carry out operations on floating-point numbers. A number representation specifies some way of encoding a number, usually as a string of digits. There are several mechanisms by which strings of digits can represent numbers. In common mathematical notation, the digit string can be of any length. If the radix point is not specified, then the string implicitly represents an integer. In fixed-point systems, a position in the string is specified for the radix point; so a fixed-point scheme might be to use a string of 8 decimal digits with the decimal point in the middle. The scaling factor, as a power of ten, is then indicated separately at the end of the number. Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of a signed digit string of a given length in a given base, together with a signed integer exponent.
This digit string is referred to as the significand, mantissa, or coefficient. The length of the significand determines the precision to which numbers can be represented. The radix point position is assumed always to be somewhere within the significand, often just after or just before the most significant digit; this article generally follows the convention that the radix point is set just after the most significant digit. The signed integer exponent modifies the magnitude of the number. Using base 10 as an example, the number 152853.5047, which has ten decimal digits of precision, is represented as the significand 1528535047 together with 5 as the exponent. In storing such a number, the base need not be stored, since it will be the same for the entire range of supported numbers. Symbolically, this value is (s / b^(p−1)) × b^e, where s is the significand, p is the precision (the number of digits in the significand), b is the base, and e is the exponent.
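Python's standard math module exposes this significand/exponent split directly, in base 2 rather than base 10; the sketch below also illustrates the non-uniform spacing of representable numbers mentioned above:

```python
import math

x = 152853.5047
# frexp splits x into a significand m and exponent e with
# x == m * 2**e and 0.5 <= |m| < 1.
m, e = math.frexp(x)
assert math.isclose(m * 2**e, x)

# Representable floats are not uniformly spaced: the gap to the next
# representable number (unit in the last place) grows with magnitude.
assert math.ulp(1e10) > math.ulp(1.0)
```

math.ulp requires Python 3.9 or later; on older versions the spacing can be probed with math.nextafter-style tricks or numpy.spacing instead.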
39.
Numerically stable
–
In the mathematical subfield of numerical analysis, numerical stability is a generally desirable property of numerical algorithms. The precise definition of stability depends on the context: one important context is numerical linear algebra, and another is algorithms for solving ordinary and partial differential equations by discrete approximation. In numerical linear algebra the principal concern is instabilities caused by proximity to singularities of various kinds. Some numerical algorithms may damp out small fluctuations (errors) in the input data; others might magnify such errors. Calculations that can be proven not to magnify approximation errors are called numerically stable. One of the tasks of numerical analysis is to try to select algorithms which are robust, that is to say, whose results do not change dramatically under small changes in the input. Typically, an algorithm involves an approximate method, and in some cases one could prove that the algorithm would approach the right solution in some limit.

There are different ways to formalize the concept of stability; the following definitions of forward, backward, and mixed stability are often used in numerical linear algebra. Consider the problem to be solved by the algorithm as a function f mapping the data x to the solution y. The result of the algorithm, say y*, will usually deviate from the true solution y; the main causes of error are round-off error and truncation error. The forward error of the algorithm is the difference between the result and the solution: in this case, Δy = y* − y. The backward error is the smallest Δx such that f(x + Δx) = y*; in other words, it measures how far the problem the algorithm actually solved is from the problem posed. The forward and backward error are related by the condition number: the forward error is at most as big in magnitude as the condition number multiplied by the magnitude of the backward error. In many cases, it is more natural to consider the relative error |Δx| / |x| instead of the absolute error Δx. The algorithm is said to be backward stable if the backward error is small for all inputs x. Of course, "small" is a relative term and its definition will depend on the context.
Often, we want the error to be of the same order of magnitude as, or perhaps only a few orders of magnitude bigger than, the unit round-off. The usual definition of numerical stability uses a more general concept, called mixed stability, which combines the forward error and the backward error. An algorithm is stable in this sense if it solves a nearby problem approximately, that is, if there exists a small Δx such that f(x + Δx) − y* is also small. Hence, a backward stable algorithm is always mixed stable.
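The stable/unstable distinction can be seen in a classic cancellation example. This is a hedged sketch, not from the source: the quadratic x² − 2bx + 1 = 0 and both function names are ours. The naive formula for the smaller root subtracts two nearly equal numbers and magnifies round-off; an algebraically equivalent rearrangement avoids the cancellation.

```python
import math

def small_root_naive(b):
    """Smaller root of x**2 - 2*b*x + 1 = 0 via the textbook formula.
    For large b, b and sqrt(b*b - 1) agree in almost every digit, so
    the subtraction cancels them and magnifies rounding error."""
    return b - math.sqrt(b * b - 1)

def small_root_stable(b):
    """The same root rewritten as 1 / (b + sqrt(b**2 - 1)):
    an addition instead of a cancelling subtraction, so rounding
    errors are not magnified."""
    return 1.0 / (b + math.sqrt(b * b - 1))

b = 1e8  # the exact small root is approximately 1 / (2*b) = 5e-9
print(small_root_naive(b))   # ruined by cancellation
print(small_root_stable(b))  # accurate to machine precision
```

Both formulas are the same function f in exact arithmetic; only the rounded computations differ, which is exactly the sense in which stability is a property of the algorithm rather than of the problem.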
40.
Iterative method
–
A specific implementation of an iterative method, including the termination criteria, is an algorithm of the iterative method. An iterative method is called convergent if the corresponding sequence converges for given initial approximations. A mathematically rigorous convergence analysis of an iterative method is usually performed; however, heuristic-based iterative methods are also common. In the problems of finding the root of an equation (or a solution of a system of equations), an iterative method uses an initial guess to generate successive approximations to a solution. In contrast, direct methods attempt to solve the problem by a finite sequence of operations; in the absence of rounding errors, direct methods would deliver an exact solution. Iterative methods are often the only choice for nonlinear equations.

An attractive fixed point of a function f can be approached by iterating x_{n+1} = f(x_n). Here x_n is the nth approximation or iteration of x and x_{n+1} is the next, or (n + 1)th, iteration of x. Alternately, superscripts in parentheses are often used in numerical methods, so as not to interfere with subscripts that carry other meanings. If the function f is differentiable, a sufficient condition for convergence is that the spectral radius of the derivative is strictly bounded by one in a neighborhood of the fixed point. If this condition holds at the fixed point, then a sufficiently small neighborhood of attraction must exist. In the case of a system of linear equations, the two main classes of iterative methods are the stationary iterative methods and the more general Krylov subspace methods. While stationary methods are simple to derive, implement, and analyze, convergence is only guaranteed for a limited class of matrices; examples of stationary iterative methods are the Jacobi method, the Gauss–Seidel method and the successive over-relaxation method. Linear stationary iterative methods are also called relaxation methods. Krylov subspace methods work by forming a basis of the sequence of successive matrix powers times the initial residual. The approximations to the solution are then formed by minimizing the residual over the subspace formed; the prototypical method in this class is the conjugate gradient method.
Other methods in this class are the generalized minimal residual method and the biconjugate gradient method. Since these methods form a basis, it is evident that the method converges in N iterations, where N is the system size. However, in the presence of rounding errors this statement does not hold; moreover, in practice N can be very large. The analysis of these methods is hard, depending on a complicated function of the spectrum of the operator, and the construction of preconditioners is a large research area. Probably the first iterative method for solving a linear system appeared in a letter of Gauss to a student of his; he proposed solving a 4-by-4 system of equations by repeatedly solving the component in which the residual was the largest. The theory of stationary iterative methods was solidly established with the work of D. M. Young starting in the 1950s. Only in the 1970s was it realized that conjugacy-based methods work very well for partial differential equations, especially of the elliptic type.
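The Jacobi method named above can be sketched in a few lines. This is a minimal illustration on a made-up strictly diagonally dominant system (a standard sufficient condition for Jacobi convergence), not an implementation from the source.

```python
def jacobi(A, b, x0, iterations=100):
    """Jacobi method for A x = b: each sweep solves the i-th equation
    for x_i, using the previous iterate's values for the other unknowns.
    Converges when A is strictly diagonally dominant."""
    n = len(b)
    x = list(x0)
    for _ in range(iterations):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
             / A[i][i]
             for i in range(n)]
    return x

# A strictly diagonally dominant 3x3 system; its exact solution is (1, 1, 1).
A = [[4.0, 1.0, 0.0],
     [1.0, 5.0, 2.0],
     [0.0, 2.0, 6.0]]
b = [5.0, 8.0, 8.0]
print(jacobi(A, b, [0.0, 0.0, 0.0]))  # approaches [1.0, 1.0, 1.0]
```

A practical implementation would terminate on a residual tolerance rather than a fixed iteration count; the fixed count here keeps the sketch short.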
41.
Limit of a sequence
–
In mathematics, the limit of a sequence is the value that the terms of the sequence tend to. If such a limit exists, the sequence is called convergent; a sequence which does not converge is said to be divergent. The limit of a sequence is said to be the fundamental notion on which the whole of analysis ultimately rests. Limits can be defined in any metric or topological space, but are usually first encountered in the real numbers.

The Greek philosopher Zeno of Elea is famous for formulating paradoxes that involve limiting processes. Leucippus, Democritus, Antiphon, Eudoxus and Archimedes developed the method of exhaustion, which uses an infinite sequence of approximations to determine an area or a volume. Archimedes succeeded in summing what is now called a geometric series. Newton dealt with series in his works on Analysis with infinite series, Method of fluxions and infinite series, and Tractatus de Quadratura Curvarum. In the latter work, Newton considers the binomial expansion of (x + o)^n, which he then linearizes by taking the limit as o tends to 0. At the end of the century, Lagrange in his Théorie des fonctions analytiques opined that the lack of rigour precluded further development in calculus. Gauss, in his study of hypergeometric series, for the first time rigorously investigated under which conditions a series converged to a limit. The modern definition of a limit was given by Bernhard Bolzano and by Karl Weierstrass in the 1870s.

In the real numbers, a number L is the limit of the sequence if the numbers in the sequence become closer and closer to L, and not to any other number. If x_n = c for some constant c, then x_n → c. If x_n = 1/n, then x_n → 0. If x_n = 1/n when n is even, and x_n = 1/n² when n is odd, then x_n → 0 as well. Given any real number, one may easily construct a sequence that converges to that number by taking decimal approximations: for example, the sequence 0.3, 0.33, 0.333, 0.3333, … converges to 1/3. Note that the decimal representation 0.3333… is the limit of the previous sequence, defined by 0.3333…
≜ lim_{n→∞} ∑_{i=1}^{n} 3/10^i. Finding the limit of a sequence is not always obvious: two examples are lim_{n→∞} n^(1/n) and the arithmetic–geometric mean. The squeeze theorem is often useful in such cases.

We call x the limit of the sequence (x_n) if for each measure of closeness ε, the sequence's terms are eventually that close to the limit. The sequence is then said to converge to, or tend to, the limit x. Symbolically, this is: ∀ε > 0, ∃N ∈ ℕ, ∀n ≥ N: |x_n − x| < ε. If a sequence converges to some limit, then it is convergent; otherwise it is divergent. Limits of sequences behave well with respect to the usual arithmetic operations. For any continuous function f, if x_n → x then f(x_n) → f(x); in fact, any real-valued function f is continuous if and only if it preserves the limits of sequences. Some other important properties of limits of sequences include the following.
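Both examples above can be checked numerically. This is a small sketch (the helper name `partial_sum` is ours): the partial sums of ∑ 3/10^i approach 1/3, and the less obvious limit n^(1/n) → 1 can be sampled at a large n.

```python
def partial_sum(n):
    """n-th term of the sequence 0.3, 0.33, 0.333, ...:
    the sum of 3/10**i for i = 1..n, whose limit is 1/3."""
    return sum(3 / 10**i for i in range(1, n + 1))

# For any eps > 0 there is an N with |x_n - 1/3| < eps for all n >= N;
# numerically, the gap shrinks toward 0 (down to round-off).
for n in (1, 5, 15):
    print(n, abs(partial_sum(n) - 1 / 3))

# A limit that is not obvious from the formula: n**(1/n) -> 1.
n = 10**6
print(n ** (1 / n))  # close to 1
```

A numerical check like this only illustrates the definition, of course; it cannot replace the ε–N argument, since floating-point round-off eventually dominates the true gap.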