1.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to its exact scope. Mathematicians seek out patterns and use them to formulate new conjectures, and they resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, and measurement; practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said, "The universe cannot be read until we have learned the language in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules; rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences.
Applied mathematics has led to entirely new mathematical disciplines, such as statistics. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind; there is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns; in Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a study of mathematics in its own right. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today; the overwhelming majority of works in this ocean contain new mathematical theorems and their proofs. The word máthēma is derived from μανθάνω ("to learn"), while the modern Greek equivalent is μαθαίνω. In Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study" even in Classical times.
2.
Linear algebra
–
Linear algebra is the branch of mathematics concerning vector spaces and linear mappings between such spaces. It includes the study of lines, planes, and subspaces; the set of points with coordinates that satisfy a linear equation forms a hyperplane in an n-dimensional space. The conditions under which a set of n hyperplanes intersect in a point are an important focus of study in linear algebra. Such an investigation is initially motivated by a system of linear equations containing several unknowns; such equations are naturally represented using the formalism of matrices and vectors. Linear algebra is central to both pure and applied mathematics: for instance, abstract algebra arises by relaxing the axioms of a vector space, leading to a number of generalizations, and functional analysis studies the infinite-dimensional version of the theory of vector spaces. Combined with calculus, linear algebra facilitates the solution of linear systems of differential equations. Because linear algebra is such a well-developed theory, nonlinear mathematical models are sometimes approximated by linear models. The study of linear algebra first emerged from the study of determinants: determinants were used by Leibniz in 1693, and subsequently Gabriel Cramer devised Cramer's Rule for solving linear systems in 1750. Later, Gauss further developed the theory of solving linear systems by using Gaussian elimination, and the study of matrix algebra first emerged in England in the mid-1800s. In 1844 Hermann Grassmann published his Theory of Extension, which included foundational new topics of what is today called linear algebra. In 1848, James Joseph Sylvester introduced the term matrix, which is Latin for womb. While studying compositions of linear transformations, Arthur Cayley was led to define matrix multiplication and inverses. Crucially, Cayley used a single letter to denote a matrix.
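Gaussian elimination, mentioned above, reduces a system of linear equations to triangular form and then back-substitutes. The following is a minimal illustrative sketch in Python (the function name solve and the sample system are our own, not from the article):

```python
def solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    # Build the augmented matrix [A | b] so row operations act on both sides.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        # Partial pivoting: pick the row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # Back-substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Example system: 2x + y = 3, x + 3y = 5.
print(solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # [0.8, 1.4]
```

Cramer's Rule solves the same systems via determinants, but elimination scales far better with the number of unknowns.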
In 1882, Hüseyin Tevfik Pasha wrote the book titled Linear Algebra. The first modern and more precise definition of a vector space was introduced by Peano in 1888, and by 1900 a theory of linear transformations of finite-dimensional vector spaces had emerged. Linear algebra took its modern form in the first half of the twentieth century, with the use of matrices in quantum mechanics and special relativity; the origin of many of these ideas is discussed in the articles on determinants and Gaussian elimination. Linear algebra first appeared in American graduate textbooks in the 1940s. Following work by the School Mathematics Study Group, U.S. high schools asked 12th-grade students to do matrix algebra, formerly reserved for college, in the 1960s. In France during the 1960s, educators attempted to teach linear algebra through finite-dimensional vector spaces in the first year of secondary school; this was met with a backlash in the 1980s that removed linear algebra from the curriculum. Curricula continue to be revisited to better suit 21st-century applications, such as data mining and uncertainty analysis.
3.
Matrix (mathematics)
–
In mathematics, a matrix is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns. For example, the dimensions of a matrix with two rows and three columns are 2 × 3. The individual items in an m × n matrix A, often denoted by a_{i,j}, where 1 ≤ i ≤ m and 1 ≤ j ≤ n, are called its elements or entries. Provided that they have the same size, two matrices can be added or subtracted element by element. The rule for multiplication, however, is that two matrices can be multiplied only when the number of columns in the first equals the number of rows in the second. Any matrix can be multiplied element-wise by a scalar from its associated field. A major application of matrices is to represent linear transformations, that is, generalizations of linear functions such as f(x) = 4x. The product of two matrices is a matrix that represents the composition of two linear transformations. Another application of matrices is in the solution of systems of linear equations. If the matrix is square, it is possible to deduce some of its properties by computing its determinant; for example, a square matrix has an inverse if and only if its determinant is not zero. Insight into the geometry of a transformation is obtainable from the matrix's eigenvalues. Applications of matrices are found in most scientific fields: in computer graphics, they are used to manipulate 3D models and project them onto a 2-dimensional screen. Matrix calculus generalizes classical analytical notions such as derivatives and exponentials to higher dimensions, and matrices are used in economics to describe systems of economic relationships. A major branch of numerical analysis is devoted to the development of efficient algorithms for matrix computations. Matrix decomposition methods simplify computations, both theoretically and practically, and algorithms that are tailored to particular matrix structures, such as sparse matrices and near-diagonal matrices, expedite computations in the finite element method and other computations.
Infinite matrices occur in planetary theory and in atomic theory; a simple example of an infinite matrix is the matrix representing the derivative operator, which acts on the Taylor series of a function. A matrix is an array of numbers or other mathematical objects for which operations such as addition and multiplication are defined. Most commonly, a matrix over a field F is an array of scalars, each of which is a member of F. Most of this article focuses on real and complex matrices, that is, matrices whose elements are real numbers or complex numbers; more general types of entries are discussed below.
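The entry-wise addition, scalar multiplication, and row-by-column product rules described above can be sketched in a few lines of Python (an illustrative sketch with our own function names, using nested lists for matrices):

```python
def mat_add(A, B):
    # Addition is defined entry by entry for matrices of the same size.
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mul(c, A):
    # Scalar multiplication multiplies every entry by c.
    return [[c * a for a in row] for row in A]

def mat_mul(A, B):
    # Defined only when columns of A equal rows of B; entry (i, j) is
    # the sum of products of row i of A with column j of B.
    assert len(A[0]) == len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(mat_mul(A, B))  # [[2, 1], [4, 3]]
print(mat_mul(B, A))  # [[3, 4], [1, 2]] -- the product is not commutative
```

The two products differ, illustrating that matrix multiplication, unlike matrix addition, is noncommutative.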
4.
Linear operator
–
In mathematics, a linear map is a mapping V → W between two modules (in particular, two vector spaces) that preserves the operations of addition and scalar multiplication. An important special case is when V = W, in which case the map is called a linear operator, or an endomorphism of V. Sometimes the term linear function has the same meaning as linear map. A linear map always maps linear subspaces onto linear subspaces; for instance, it maps a plane through the origin to a plane through the origin. Linear maps can often be represented as matrices, and simple examples include rotation and reflection linear transformations. In the language of abstract algebra, a linear map is a module homomorphism; in the language of category theory, it is a morphism in the category of modules over a given ring. Let V and W be vector spaces over the same field K. A function f : V → W is a linear map if for any vectors x_1, …, x_m ∈ V and scalars a_1, …, a_m ∈ K, the equality f(a_1 x_1 + ⋯ + a_m x_m) = a_1 f(x_1) + ⋯ + a_m f(x_m) holds. When V and W carry the structure of vector spaces over more than one field, it is necessary to specify which of these fields is being used in the definition of linear. If V and W are considered as spaces over the field K as above: for example, the conjugation of complex numbers is an R-linear map C → C, but it is not C-linear. A linear map from V to K is called a linear functional. These statements generalize to any left-module RM over a ring R without modification, and to any right-module upon reversing the scalar multiplication. The zero map between two left-modules over the same ring is always linear. The identity map on any module is a linear operator. Any homothecy centered at the origin of a vector space, v ↦ cv where c is a scalar, is a linear operator; this does not hold in general for modules, where such a map might only be semilinear. For real numbers, the map x ↦ x² is not linear. Any matrix A over a field gives rise to a linear map x ↦ Ax; conversely, any linear map between finite-dimensional vector spaces can be represented in this manner, see the following section. Differentiation defines a linear map from the space of all differentiable functions to the space of all functions. It also defines a linear operator on the space of all smooth functions.
If V and W are finite-dimensional vector spaces over a field F, then functions that send linear maps f : V → W to dim_F(W) × dim_F(V) matrices in the way described in the sequel are themselves linear maps. The expected value of a random variable is linear: for random variables X and Y and a scalar a, we have E[X + Y] = E[X] + E[Y] and E[aX] = aE[X].
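The defining identity f(ax + by) = a f(x) + b f(y) can be checked numerically for the matrix map f(x) = Ax. The following Python sketch (our own illustrative example, not a proof) does exactly that for one choice of A, vectors, and scalars:

```python
def mat_vec(A, x):
    # Apply the linear map f(x) = A x to a vector x.
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

A = [[2, 0], [1, 3]]
x, y, a, b = [1, 2], [4, -1], 3, -2

# Left side: f(a*x + b*y).
lhs = mat_vec(A, [a * xi + b * yi for xi, yi in zip(x, y)])
# Right side: a*f(x) + b*f(y).
rhs = [a * fx + b * fy for fx, fy in zip(mat_vec(A, x), mat_vec(A, y))]

print(lhs == rhs)  # True: the map preserves linear combinations
```

By contrast, substituting a nonlinear map such as x ↦ x² into the same check fails, as the article notes.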
5.
Complex number
–
A complex number is a number that can be expressed in the form a + bi, where a and b are real numbers and i is the imaginary unit, satisfying the equation i² = −1. In this expression, a is the real part and b is the imaginary part of the complex number. If z = a + bi, then ℜz = a and ℑz = b. Complex numbers extend the concept of the one-dimensional number line to the two-dimensional complex plane by using the horizontal axis for the real part and the vertical axis for the imaginary part; the complex number a + bi can be identified with the point (a, b) in the complex plane. A complex number whose real part is zero is said to be purely imaginary, whereas a complex number whose imaginary part is zero is a real number. In this way, the complex numbers are a field extension of the ordinary real numbers. As well as their use within mathematics, complex numbers have applications in many fields, including physics, chemistry, biology, economics, and electrical engineering. The Italian mathematician Gerolamo Cardano is the first known to have introduced complex numbers; he called them fictitious during his attempts to find solutions to cubic equations in the 16th century. Complex numbers allow solutions to equations that have no solutions in real numbers: for example, the equation x² = −9 has no real solution, and complex numbers provide a solution to this problem. The idea is to extend the real numbers with the unit i where i² = −1. According to the fundamental theorem of algebra, all polynomial equations with real or complex coefficients in a single variable have a solution in complex numbers. A complex number is a number of the form a + bi; for example, −3.5 + 2i is a complex number. The real number a is called the real part of the complex number a + bi, and the real number b is called the imaginary part. By this convention the imaginary part does not include the imaginary unit: hence b, not bi, is the imaginary part. The real part of a complex number z is denoted by Re(z) or ℜ(z), and the imaginary part by Im(z) or ℑ(z). For example, Re(−3.5 + 2i) = −3.5 and Im(−3.5 + 2i) = 2. Hence, in terms of its real and imaginary parts, a complex number z is equal to Re(z) + Im(z)·i.
This expression is known as the Cartesian form of z. A real number a can be regarded as a complex number a + 0i, whose imaginary part is 0.
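Python's built-in complex type mirrors the notions above: .real and .imag give Re(z) and Im(z) (the coefficient b, not bi), and 1j plays the role of i. A brief illustrative sketch:

```python
z = -3.5 + 2j
print(z.real)  # -3.5  (the real part, Re z)
print(z.imag)  # 2.0   (the imaginary part, Im z)

i = 1j
print(i * i)   # (-1+0j): the defining identity i**2 = -1

# x**2 = -9 has no real solution, but 3i solves it:
print((3j) ** 2)  # (-9+0j)
```

Note that Python follows the same convention as the article: the imaginary part is the real coefficient 2.0, not the imaginary number 2i.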
6.
Absolute value
–
In mathematics, the absolute value or modulus |x| of a real number x is the non-negative value of x without regard to its sign. Namely, |x| = x for a positive x, |x| = −x for a negative x, and |0| = 0. For example, the absolute value of 3 is 3, and the absolute value of −3 is also 3. The absolute value of a number may be thought of as its distance from zero. Generalisations of the absolute value for real numbers occur in a wide variety of mathematical settings; for example, an absolute value is also defined for the complex numbers. The absolute value is closely related to the notions of magnitude, distance, and norm. The term absolute value has been used in this sense from at least 1806 in French and 1857 in English; the notation |x|, with a vertical bar on each side, was introduced by Karl Weierstrass in 1841. Other names for absolute value include numerical value and magnitude. In programming languages and computational software packages, the absolute value of x is generally represented by abs(x), or a similar expression. Thus, care must be taken to interpret vertical bars as an absolute value sign only when the argument is an object for which the notion of an absolute value is defined. For any real number x, the absolute value or modulus of x is denoted by |x| and is defined as |x| = x if x ≥ 0, and |x| = −x if x < 0. As can be seen from the definition, the absolute value of x is always either positive or zero, never negative. Indeed, the notion of a distance function in mathematics can be seen to be a generalisation of the absolute value of the difference. Since the square root notation without sign represents the positive square root, it follows that |x| = √(x²); this identity is sometimes used as a definition of the absolute value of real numbers. The absolute value has the four fundamental properties of non-negativity (|a| ≥ 0), positive-definiteness (|a| = 0 if and only if a = 0), multiplicativity (|ab| = |a||b|), and subadditivity (|a + b| ≤ |a| + |b|). The first three properties are readily apparent from the definition. To see that subadditivity holds, choose ε = ±1 so that ε(a + b) ≥ 0; then |a + b| = ε(a + b) = εa + εb ≤ |a| + |b|. Some additional useful properties are given below.
These additional properties are either implied by or equivalent to the four fundamental properties above. For example, the absolute value is used to define the absolute difference, the standard metric on the real numbers. Since the complex numbers are not ordered, the definition given above for the real absolute value cannot be directly generalised to a complex number.
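The piecewise definition, the identity |x| = √(x²), and the complex modulus can all be illustrated in Python, where abs() handles both real and complex arguments (the helper absolute is our own illustrative name):

```python
import math

def absolute(x):
    # Piecewise definition for real x: x if x >= 0, else -x.
    return x if x >= 0 else -x

print(absolute(3), absolute(-3))  # 3 3
print(abs(-3))                    # 3 (the built-in equivalent)
print(math.sqrt((-3) ** 2))       # 3.0: the identity |x| = sqrt(x**2)
print(abs(3 + 4j))                # 5.0: the complex modulus sqrt(3**2 + 4**2)
```

The last line shows the generalisation to complex numbers mentioned above: the modulus of 3 + 4i is its distance from zero in the complex plane.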
7.
Positive number
–
In mathematics, the concept of sign originates from the property of every non-zero real number of being either positive or negative. Zero itself is signless, although in some contexts it makes sense to consider a signed zero. Along with its application to real numbers, change of sign is used throughout mathematics and physics to denote the additive inverse, even for quantities which are not real numbers. Also, the word sign can indicate aspects of mathematical objects that resemble positivity and negativity. A real number is said to be positive if its value is greater than zero, and negative if it is less than zero. The attribute of being positive or negative is called the sign of the number; zero itself is not considered to have a sign. Also, signs are not defined for complex numbers, although the argument generalizes the sign in some sense. In common numeral notation, the sign of a number is often denoted by placing a plus sign or a minus sign before the number. For example, +3 denotes positive three, and −3 denotes negative three; when no plus or minus sign is given, the default interpretation is that a number is positive. Because of this notation, as well as the definition of negative numbers through subtraction, the minus sign can be read as denoting the additive inverse; in this context, it makes sense to write −(−3) = +3. Any non-zero number can be changed to a positive one using the absolute value function: for example, the absolute value of −3 and the absolute value of 3 are both equal to 3. In symbols, this would be written |−3| = 3 and |3| = 3. The number zero is neither positive nor negative, and therefore has no sign. In arithmetic, +0 and −0 both denote the same number 0, which is the additive inverse of itself. Note that this definition is culturally determined: in France and Belgium, 0 is said to be both positive and negative. The positive (resp. negative) numbers without zero are then said to be strictly positive (resp. strictly negative). In some contexts, such as signed number representations in computing, it makes sense to consider signed versions of zero, with positive zero and negative zero being different numbers.
One also sees +0 and −0 in calculus and mathematical analysis when evaluating one-sided limits; this notation refers to the behaviour of a function as the input variable approaches 0 from positive or negative values respectively, and these behaviours are not necessarily the same. Because zero is neither positive nor negative, the following phrases are sometimes used to refer to the sign of an unknown number: a number is negative if it is less than zero; a number is non-negative if it is greater than or equal to zero; a number is non-positive if it is less than or equal to zero. Thus a non-negative number is either positive or zero, while a non-positive number is either negative or zero.
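The sign conventions above can be made concrete with a small Python sketch (the helper sign is our own illustrative function; it returns 0 at zero, reflecting the convention that zero has no sign):

```python
def sign(x):
    # 1 for positive x, -1 for negative x, 0 for the signless zero.
    return (x > 0) - (x < 0)

print(sign(3), sign(-3), sign(0))  # 1 -1 0
print(abs(-3) == abs(3) == 3)      # True: absolute value discards the sign
print(0.0 == -0.0)                 # True: in arithmetic, +0 and -0 denote 0
```

The last line also hints at the computing caveat in the text: floating-point formats store a signed zero, yet the two zeros compare as equal.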
8.
Real number
–
In mathematics, a real number is a value that represents a quantity along a line. The adjective real in this context was introduced in the 17th century by René Descartes. The real numbers include all the rational numbers, such as the integer −5 and the fraction 4/3, and all the irrational numbers, such as √2. Included within the irrationals are the transcendental numbers, such as π. Real numbers can be thought of as points on an infinitely long line called the number line or real line. Any real number can be determined by a possibly infinite decimal representation, such as that of 8.632. The real line can be thought of as a part of the complex plane, and complex numbers include real numbers. These descriptions of the real numbers are not sufficiently rigorous by the modern standards of pure mathematics; several rigorous definitions exist, and all of them satisfy the axiomatic characterization and are thus equivalent. The statement that there is no subset of the reals with cardinality strictly greater than ℵ0 and strictly smaller than that of the reals is known as the continuum hypothesis. Simple fractions were used by the Egyptians around 1000 BC, and the Vedic Sulba Sutras (c. 600 BC) include what may be the first use of irrational numbers. Around 500 BC, the Greek mathematicians led by Pythagoras realized the need for irrational numbers, in particular the irrationality of the square root of 2. Arabic mathematicians merged the concepts of number and magnitude into a more general idea of real numbers. In the 16th century, Simon Stevin created the basis for modern decimal notation, and in the 17th century, Descartes introduced the term real to describe roots of a polynomial, distinguishing them from imaginary ones. In the 18th and 19th centuries, there was much work on irrational and transcendental numbers. Johann Heinrich Lambert gave the first flawed proof that π cannot be rational, and Adrien-Marie Legendre completed the proof; Évariste Galois developed techniques for determining whether a given equation could be solved by radicals, which gave rise to the field of Galois theory.
Charles Hermite first proved that e is transcendental, and Ferdinand von Lindemann showed that π is transcendental. Lindemann's proof was much simplified by Weierstrass, still further by David Hilbert, and has finally been made elementary by Adolf Hurwitz and Paul Gordan. The development of calculus in the 18th century used the entire set of real numbers without having defined them cleanly. The first rigorous definition was given by Georg Cantor in 1871, and in 1874 he showed that the set of all real numbers is uncountably infinite but the set of all algebraic numbers is countably infinite. Contrary to widely held beliefs, his first method was not his famous diagonal argument. The real number system can be defined axiomatically up to an isomorphism, which is described hereafter. Another possibility is to start from some rigorous axiomatization of Euclidean geometry; from the structuralist point of view all these constructions are on equal footing.
9.
Circle group
–
The circle group, denoted T, forms a subgroup of C×, the multiplicative group of all nonzero complex numbers. Since C× is abelian, it follows that T is as well. The circle group is also the group U(1) of 1×1 unitary matrices; these act on the complex plane by rotation about the origin. The circle group can be parametrized by the angle θ of rotation by θ ↦ z = e^(iθ) = cos θ + i sin θ; this is the exponential map for the circle group. The circle group plays a central role in Pontryagin duality. The notation T for the circle group stems from the fact that, with the standard topology, the circle group is a 1-torus; more generally, Tⁿ is geometrically an n-torus. One way to think about the circle group is that it describes how to add angles, where only angles between 0° and 360° are permitted. For example, the diagram illustrates how to add 150° to 270°. The answer should be 150° + 270° = 420°, but when thinking in terms of the circle group, we need to forget the fact that we have wrapped once around the circle; therefore, we adjust our answer by 360°, which gives 420° − 360° = 60°. Another description is in terms of ordinary addition, where only numbers between 0 and 1 are allowed; to achieve this, we might need to throw away digits occurring before the decimal point. For example, when we work out 0.784 + 0.925 + 0.446, the answer should be 2.155, but we throw away the leading 2, so the answer in the circle group is 0.155. The circle group is more than just an abstract algebraic object: it has a natural topology when regarded as a subspace of the complex plane, and since multiplication and inversion are continuous functions on C×, the circle group has the structure of a topological group. Moreover, since the circle is a closed subset of the complex plane, the circle group is a closed subgroup of C×. The circle is a 1-dimensional real manifold, and multiplication and inversion are real-analytic maps on the circle; this gives the circle group the structure of a one-parameter group, an instance of a Lie group. In fact, up to isomorphism, it is the unique 1-dimensional compact, connected Lie group. Moreover, every n-dimensional compact, connected, abelian Lie group is isomorphic to Tⁿ.
The circle group shows up in a variety of forms in mathematics, and we list some of the more common forms here. Specifically, we show that T ≅ U(1) ≅ R/Z ≅ SO(2); note that the slash here denotes the quotient group. The set of all 1×1 unitary matrices clearly coincides with the circle group; therefore, the circle group is canonically isomorphic to U(1), the first unitary group. The exponential function gives rise to a group homomorphism exp : R → T from the additive group of real numbers R to the circle group T via the map θ ↦ e^(iθ) = cos θ + i sin θ.
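The two pictures of the circle group given above, wrap-around addition of angles and multiplication of unit complex numbers under the exponential map, agree, as this Python sketch illustrates (the helper add_angles is our own name):

```python
import cmath
import math

def add_angles(a, b):
    # Wrap-around addition: forget full turns around the circle.
    return (a + b) % 360

print(add_angles(150, 270))  # 60, since 420 - 360 = 60

# The same computation via the exponential map theta -> e^(i*theta):
# multiplying unit complex numbers adds their angles automatically.
z = cmath.exp(1j * math.radians(150)) * cmath.exp(1j * math.radians(270))
print(round(math.degrees(cmath.phase(z)) % 360))  # 60
```

The complex multiplication never produces an angle outside one turn, which is exactly the "forgetting" that the quotient R/Z formalizes.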
10.
Complex conjugate
–
In mathematics, the complex conjugate of a complex number is the number with equal real part and imaginary part equal in magnitude but opposite in sign. For example, the complex conjugate of 3 + 4i is 3 − 4i. In polar form, the conjugate of ρe^(iϕ) is ρe^(−iϕ); this can be shown using Euler's formula. Complex conjugates are important for finding roots of polynomials: according to the complex conjugate root theorem, if a complex number is a root of a polynomial in one variable with real coefficients, then so is its conjugate. The complex conjugate of a number z is written as z̄ or z∗. The first notation avoids confusion with the notation for the conjugate transpose of a matrix; the second is preferred in physics, where the dagger is used for the conjugate transpose. If a complex number is represented as a 2×2 matrix, the notations are identical. In some texts, the complex conjugate of a previously known number is abbreviated as "c.c.". A significant property of the conjugate is that a complex number is equal to its complex conjugate if and only if its imaginary part is zero. The conjugate of the conjugate of a complex number z is z. The relation z⁻¹ = z̄/|z|², valid for non-zero z, is the method of choice to compute the inverse of a complex number given in rectangular coordinates. Conjugation commutes with the standard functions: exp(z̄) equals the conjugate of exp(z), and log(z̄) equals the conjugate of log(z) if z is non-zero. If p is a polynomial with real coefficients, then p(z̄) equals the conjugate of p(z); thus, non-real roots of real polynomials occur in complex conjugate pairs. In general, the same holds if ϕ is a function whose restriction to the real numbers is real-valued. The map σ(z) = z̄ from C to C is a homeomorphism and antilinear; even though it appears to be a well-behaved function, it is not holomorphic: it reverses orientation, whereas holomorphic functions locally preserve orientation. It is bijective and compatible with the arithmetical operations, and hence is a field automorphism. As it keeps the real numbers fixed, it is an element of the Galois group of the field extension C/R; this Galois group has only two elements, σ and the identity on C.
Thus the only two field automorphisms of C that leave the real numbers fixed are the identity map and complex conjugation. Similarly, for a fixed complex unit u = exp(ib), the equation (z − z0)/(z̄ − z̄0) = u determines the line through z0 in the direction of u.
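Python exposes the conjugate through the conjugate() method on complex numbers, which lets us illustrate the inverse formula z⁻¹ = z̄/|z|² stated above (an illustrative sketch):

```python
z = 3 + 4j
print(z.conjugate())              # (3-4j)
print(z * z.conjugate())          # (25+0j): equals abs(z)**2, a real number
print(z.conjugate() / abs(z)**2)  # the inverse computed via the conjugate
print(1 / z)                      # the same value computed directly
```

Multiplying z by its conjugate always yields the real number |z|², which is why dividing the conjugate by |z|² recovers the multiplicative inverse.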
11.
Determinant
–
In linear algebra, the determinant is a useful value that can be computed from the elements of a square matrix. The determinant of a matrix A is denoted det(A) or det A, and it can be viewed as the scaling factor of the transformation described by the matrix. In the case of a 2 × 2 matrix with rows (a, b) and (c, d), the determinant is ad − bc. The determinant of a 3 × 3 matrix can be expanded as an alternating sum of entries of the first row, each multiplied by the determinant of the 2 × 2 matrix obtained by deleting that entry's row and column; each determinant of a 2 × 2 matrix in this expansion is called a minor of the matrix A. The same sort of procedure can be used to find the determinant of a 4 × 4 matrix, the determinant of a 5 × 5 matrix, and so forth. The use of determinants in calculus includes the Jacobian determinant in the change of variables rule for integrals of functions of several variables. Determinants are also used to define the characteristic polynomial of a matrix, and in analytic geometry, determinants express the signed n-dimensional volumes of n-dimensional parallelepipeds. Sometimes, determinants are used merely as a compact notation for expressions that would otherwise be unwieldy to write down. When the entries of the matrix are taken from a field, it can be proven that a matrix has an inverse if and only if its determinant is nonzero. There are various equivalent ways to define the determinant of a square matrix A, i.e. one with the same number of rows and columns. Another way to define the determinant is expressed in terms of the columns of the matrix; these properties mean that the determinant is an alternating multilinear function of the columns that maps the identity matrix to the underlying unit scalar. These properties suffice to uniquely calculate the determinant of any square matrix, provided the underlying scalars form a field; the definition below shows that such a function exists, and it can be shown to be unique. Assume A is a matrix with n rows and n columns. The entries can be numbers or expressions; the definition of the determinant depends only on the fact that they can be added and multiplied together in a commutative manner. The determinant of a 2 × 2 matrix is defined by | a b; c d | = ad − bc.
If the matrix entries are real numbers, the matrix A can be used to represent two linear maps: one that maps the standard basis vectors to the rows of A, and one that maps them to the columns of A. In either case, the images of the basis vectors form a parallelogram that represents the image of the unit square under the mapping. The parallelogram defined by the rows of the matrix above is the one with vertices at (0, 0), (a, b), (a + c, b + d), and (c, d). The absolute value of ad − bc is the area of this parallelogram; the absolute value of the determinant together with the sign becomes the oriented area of the parallelogram.
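The 2 × 2 formula ad − bc and its area interpretation can be sketched directly (an illustrative example; det2 is our own helper name):

```python
def det2(A):
    # Determinant of a 2 x 2 matrix given as rows [[a, b], [c, d]].
    (a, b), (c, d) = A
    return a * d - b * c

# Parallelogram spanned by rows (3, 0) and (1, 2): base 3, height 2, area 6.
print(det2([[3, 0], [1, 2]]))   # 6

# Parallel rows collapse the parallelogram: determinant 0, no inverse exists.
print(det2([[1, 2], [2, 4]]))   # 0
```

The second matrix illustrates the invertibility criterion stated above: a zero determinant means the transformation squashes the plane onto a line, so it cannot be undone.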
12.
Singular value decomposition
–
In linear algebra, the singular value decomposition (SVD) is a factorization of a real or complex matrix. It is the generalization of the eigendecomposition of a positive semidefinite normal matrix to any m × n matrix via an extension of the polar decomposition, and it has many useful applications in signal processing and statistics. The singular value decomposition of an m × n matrix M is a factorization of the form M = UΣV∗, where U is an m × m unitary matrix, Σ is an m × n rectangular diagonal matrix with non-negative real numbers on the diagonal, and V is an n × n unitary matrix. The diagonal entries σᵢ of Σ are known as the singular values of M. The columns of U and the columns of V are called the left-singular vectors and right-singular vectors of M. The singular value decomposition can be computed using the following observations: the left-singular vectors of M are a set of orthonormal eigenvectors of MM∗; the right-singular vectors of M are a set of orthonormal eigenvectors of M∗M; and the non-zero singular values of M are the square roots of the non-zero eigenvalues of both M∗M and MM∗. Suppose M is an m × n matrix whose entries come from the field K; V∗ is the conjugate transpose of the n × n unitary matrix V, thus also unitary. A common convention is to list the singular values in descending order; in this case, the diagonal matrix Σ is uniquely determined by M. The expression UΣV∗ can thus be interpreted as a composition of three geometrical transformations: a rotation or reflection, a scaling, and another rotation or reflection. For instance, the figure above explains how a matrix can be described as such a sequence. If the rotation is done first, M = PR, then R is the same and P = UΣU∗ has the same eigenvalues. This shows that the SVD is a generalization of the eigenvalue decomposition of pure stretches in orthogonal directions to arbitrary matrices which both stretch and rotate. As shown in the figure, the singular values can be interpreted as the semiaxes of an ellipse in 2D; this concept can be generalized to n-dimensional Euclidean space, with the singular values of any n × n square matrix being viewed as the semiaxes of an n-dimensional ellipsoid.
Since U and V∗ are unitary, the columns of each of them form a set of orthonormal vectors, which can be regarded as basis vectors; the matrix M maps the basis vector Vᵢ to the stretched unit vector σᵢUᵢ. By the definition of a unitary matrix, the same is true for their conjugate transposes U∗ and V. In short, the columns of U, U∗, V, and V∗ are orthonormal bases. Consider, for example, a 4 × 5 matrix M with a singular value decomposition M = UΣV∗; in such an example, Σ is zero outside of the diagonal, and one diagonal element can be zero.
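The observation above that the singular values are the square roots of the eigenvalues of MᵀM can be carried out by hand for a 2 × 2 real matrix, where the eigenvalues of MᵀM come from a quadratic. A Python sketch of this (the function name and the sample matrix are our own illustration):

```python
import math

def singular_values_2x2(M):
    # Form S = M^T M, then solve its characteristic quadratic
    # lambda^2 - trace(S)*lambda + det(S) = 0; singular values are sqrt(lambda).
    (a, b), (c, d) = M
    s11 = a * a + c * c
    s12 = a * b + c * d
    s22 = b * b + d * d
    tr, det = s11 + s22, s11 * s22 - s12 * s12
    disc = math.sqrt(tr * tr - 4 * det)
    lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
    return math.sqrt(lam1), math.sqrt(max(lam2, 0.0))

# For M = [[3, 0], [4, 5]], M^T M = [[25, 20], [20, 25]],
# with eigenvalues 45 and 5, giving singular values sqrt(45) and sqrt(5).
print(singular_values_2x2([[3, 0], [4, 5]]))
```

General-purpose SVD routines do not form MᵀM explicitly for numerical reasons, but the small example makes the eigenvalue connection concrete.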
13.
Hilbert space
–
The mathematical concept of a Hilbert space, named after David Hilbert, generalizes the notion of Euclidean space. It extends the methods of vector algebra and calculus from the two-dimensional Euclidean plane and three-dimensional space to spaces with any finite or infinite number of dimensions. A Hilbert space is an abstract vector space possessing the structure of an inner product that allows length and angle to be measured. Furthermore, Hilbert spaces are complete: there are enough limits in the space to allow the techniques of calculus to be used. Hilbert spaces arise naturally and frequently in mathematics and physics, typically as infinite-dimensional function spaces; the earliest Hilbert spaces were studied from this point of view in the first decade of the 20th century by David Hilbert, Erhard Schmidt, and Frigyes Riesz. They are indispensable tools in the theories of partial differential equations, quantum mechanics, Fourier analysis, and ergodic theory. John von Neumann coined the term Hilbert space for the abstract concept that underlies many of these diverse applications. The success of Hilbert space methods ushered in a very fruitful era for functional analysis. Geometric intuition plays an important role in many aspects of Hilbert space theory: exact analogs of the Pythagorean theorem and parallelogram law hold in a Hilbert space, and at a deeper level, perpendicular projection onto a subspace plays a significant role in optimization problems and other aspects of the theory. An element of a Hilbert space can be specified by its coordinates with respect to a set of coordinate axes. When that set of axes is countably infinite, this means that the Hilbert space can also usefully be thought of in terms of the space of sequences that are square-summable; the latter space is often in the literature referred to as the Hilbert space. One of the most familiar examples of a Hilbert space is the Euclidean space consisting of three-dimensional vectors, denoted by ℝ³.
The dot product takes two vectors x and y and produces a real number x·y. If x and y are represented in Cartesian coordinates, then the dot product is defined by x ⋅ y = x1y1 + x2y2 + x3y3. The dot product satisfies the following properties: it is symmetric in x and y, x · y = y · x; it is linear in its first argument, (ax1 + bx2) · y = a(x1 · y) + b(x2 · y) for any scalars a, b and vectors x1, x2, and y; and it is positive definite, x · x ≥ 0 for all x, with equality if and only if x = 0. An operation on pairs of vectors that, like the dot product, satisfies these three properties is known as an inner product, and a vector space equipped with such an inner product is known as an inner product space. Every finite-dimensional inner product space is also a Hilbert space. Multivariable calculus in Euclidean space relies on the ability to compute limits, and to have useful criteria for concluding that limits exist.
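The three defining properties of the dot product can be verified directly. A small Python sketch (the vectors and scalars are chosen arbitrarily for illustration):

```python
def dot(x, y):
    # x . y = x1*y1 + x2*y2 + x3*y3 in Cartesian coordinates
    return sum(xi * yi for xi, yi in zip(x, y))

x  = (1.0, -2.0, 3.0)
x2 = (0.0, 1.0, 5.0)
y  = (4.0, 0.5, -1.0)
a, b = 2.0, -3.0

symmetric = dot(x, y) == dot(y, x)
combo = tuple(a * u + b * v for u, v in zip(x, x2))           # a*x + b*x2
linear = abs(dot(combo, y) - (a * dot(x, y) + b * dot(x2, y))) < 1e-9
positive = dot(x, x) > 0
```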
14.
Quaternion
–
In mathematics, the quaternions are a number system that extends the complex numbers. They were first described by the Irish mathematician William Rowan Hamilton in 1843. A distinguishing feature of quaternions is that multiplication of two quaternions is noncommutative. Hamilton defined a quaternion as the quotient of two directed lines in a three-dimensional space, or equivalently as the quotient of two vectors. Quaternions are generally represented in the form a + bi + cj + dk, where a, b, c, and d are real numbers, and i, j, and k are the fundamental quaternion units. In practical applications, they can be used alongside other methods, such as Euler angles and rotation matrices, or as an alternative to them. In modern mathematical language, quaternions form a four-dimensional associative normed division algebra over the real numbers; in fact, the quaternions were the first noncommutative division algebra to be discovered. The algebra of quaternions is often denoted by H, or in blackboard bold by ℍ, and it can also be given by the Clifford algebra classifications Cl0,2(ℝ) ≅ Cl03,0(ℝ). These rings are also Euclidean Hurwitz algebras, of which the quaternions are the largest associative algebra. The unit quaternions can be thought of as a choice of a group structure on the 3-sphere S3 that gives the group Spin(3). Quaternion algebra was introduced by Hamilton in 1843; Carl Friedrich Gauss had also discovered quaternions in 1819, but this work was not published until 1900. Hamilton knew that the complex numbers could be interpreted as points in a plane. Points in space can be represented by their coordinates, which are triples of numbers; however, Hamilton had been stuck on the problem of multiplication and division for a long time. He could not figure out how to calculate the quotient of the coordinates of two points in space. The great breakthrough in quaternions finally came on Monday 16 October 1843 in Dublin; as he walked along the towpath of the Royal Canal with his wife, the concepts behind quaternions were taking shape in his mind.
When the answer dawned on him, Hamilton could not resist the urge to carve the formula for the quaternions, i2 = j2 = k2 = ijk = −1, into the stone of Brougham Bridge as he paused on it. On the following day, Hamilton wrote a letter to his friend and fellow mathematician John T. Graves; this letter was later published in the London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, vol. xxv, pp. 489–95. In the letter, Hamilton states: "And here there dawned on me the notion that we must admit, in some sense, a fourth dimension of space for the purpose of calculating with triples ... An electric circuit seemed to close, and a spark flashed forth." Hamilton called a quadruple with these rules of multiplication a quaternion. Hamilton's treatment is more geometric than the modern approach, which emphasizes quaternions' algebraic properties.
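Hamilton's bridge relations i2 = j2 = k2 = ijk = −1 determine the entire multiplication table. A minimal Python sketch (quaternions represented as 4-tuples; the helper name `qmul` is assumed):

```python
def qmul(p, q):
    """Hamilton product of quaternions represented as (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
minus_one = (-1, 0, 0, 0)
```

Noncommutativity is visible immediately: ij = k but ji = −k.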
15.
Versor
–
Versors are an algebraic parametrisation of rotations. In classical quaternion theory a versor is a quaternion of norm one. Each versor has the form q = exp(ar) = cos a + r sin a, where r2 = −1 and a ∈ [0, π]; in case a = π/2, the versor is termed a right versor. The corresponding 3-dimensional rotation has the angle 2a about the axis r in axis–angle representation. The word is derived from Latin versare, "to turn", with the suffix -or forming a noun from the verb. It was introduced by William Rowan Hamilton in the context of his quaternion theory; for historical reasons, it is sometimes used synonymously with "unit quaternion" without any reference to rotations. In the quaternion algebra a versor q = exp(ar) will rotate any quaternion v through the product map v ↦ q v q−1, such that the scalar part of v is preserved. If this scalar part is zero, i.e. v is a Euclidean vector in three dimensions, then the formula above defines the rotation through the angle 2a around the vector r. In other words, qvq−1 rotates the vector part of v around the vector r; see quaternions and spatial rotation for details. A quaternionic versor expressed in the complex 2×2 matrix representation is an element of the special unitary group SU(2). Spin(3) and SU(2) are the same group. Angles of rotation in this λ = 1/2 representation are equal to a; there is no factor of 2 in the angles, unlike in the λ = 1 adjoint representation mentioned above; see representation theory of SU(2) for details. For a fixed r, versors of the form exp(ar) where a ∈ (−π, π] form a subgroup. In 2003 David W. Lyons wrote that "the fibers of the Hopf map are circles in S3"; Lyons gives an introduction to quaternions to elucidate the Hopf fibration as a mapping on unit quaternions. Hamilton denoted the versor of a quaternion q by the symbol Uq; he was then able to display the general quaternion in polar coordinate form q = Tq Uq, where Tq is the norm of q.
The norm of a versor is always equal to one; hence the versors occupy the unit 3-sphere in H. Examples of versors include the eight elements of the quaternion group. Of particular importance are the right versors, which have angle π/2; these versors have zero scalar part, and so are vectors of length one. The right versors form a sphere of square roots of −1 in the quaternion algebra; the generators i, j, and k are examples of right versors, as are their additive inverses. Other versors include the twenty-four Hurwitz quaternions that have norm 1. Hamilton defined a quaternion as the quotient of two vectors, and a versor can be defined as the quotient of two unit vectors. For any fixed plane Π, the quotient of two unit vectors lying in Π depends only on the angle between them, the same a as in the unit vector–angle representation of a versor explained above.
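The rotation v ↦ q v q−1 through angle 2a can be checked numerically. A Python sketch with the assumed choice r = k and a = π/6, so the vector (1, 0, 0) should rotate by 2a = π/3 about the z-axis:

```python
import math

def qmul(p, q):
    # Hamilton product of quaternions (a, b, c, d) = a + bi + cj + dk
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def conj(q):
    # For a versor (norm one), the conjugate equals the inverse.
    return (q[0], -q[1], -q[2], -q[3])

a = math.pi / 6
q = (math.cos(a), 0.0, 0.0, math.sin(a))  # versor exp(a k)
v = (0.0, 1.0, 0.0, 0.0)                  # pure quaternion for the vector (1, 0, 0)
w = qmul(qmul(q, v), conj(q))             # rotated vector; scalar part stays zero
```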
16.
3-sphere
–
In mathematics, a 3-sphere is a higher-dimensional analogue of a sphere. It consists of the set of points equidistant from a fixed central point in 4-dimensional Euclidean space. A 3-sphere is an example of a 3-manifold. In coordinates, a 3-sphere with center (C0, C1, C2, C3) and radius r is the set of all points (x0, x1, x2, x3) in real, 4-dimensional space such that ∑i=03 (xi − Ci)2 = (x0 − C0)2 + (x1 − C1)2 + (x2 − C2)2 + (x3 − C3)2 = r2. The 3-sphere centered at the origin with radius 1 is called the unit 3-sphere and is usually denoted S3. It is often convenient to regard R4 as the space with 2 complex dimensions (C2) or as the quaternions (H). The unit 3-sphere is then given by S3 = {(z1, z2) ∈ C2 : |z1|2 + |z2|2 = 1} or S3 = {q ∈ H : ‖q‖ = 1}. This description as the quaternions of norm one identifies the 3-sphere with the versors in the quaternion division ring, just as the circle is important for planar polar coordinates; see polar decomposition of a quaternion for details of this development of the three-sphere. This view of the 3-sphere is the basis for the study of elliptic space as developed by Georges Lemaître. The 3-dimensional cubic hyperarea of a 3-sphere of radius r is 2π2r3, while the 4-dimensional quartic hypervolume it encloses is (1/2)π2r4. Every non-empty intersection of a 3-sphere with a three-dimensional hyperplane is a 2-sphere; as the 3-sphere leaves the hyperplane, the 2-sphere shrinks again down to a point. A 3-sphere is a compact, connected, 3-dimensional manifold without boundary; it is also simply connected. What this means, in the broad sense, is that any loop, or circular path, on the 3-sphere can be continuously shrunk to a point without leaving the 3-sphere. The Poincaré conjecture, proved in 2003 by Grigori Perelman, provides that the 3-sphere is the only three-dimensional manifold with these properties. The 3-sphere is homeomorphic to the one-point compactification of R3. In general, any topological space that is homeomorphic to the 3-sphere is called a topological 3-sphere. The homology groups of the 3-sphere are as follows: H0 and H3 are infinite cyclic (isomorphic to Z), while H1 and H2 are trivial. Any topological space with these homology groups is known as a homology 3-sphere.
Initially Poincaré conjectured that all homology 3-spheres are homeomorphic to S3, but infinitely many homology spheres are now known to exist. For example, a Dehn filling with slope 1/n on any knot in the 3-sphere gives a homology sphere. As to the homotopy groups, we have π1 = π2 = 0 and π3 is infinite cyclic. The higher homotopy groups are all finite abelian but otherwise follow no discernible pattern; for more discussion see homotopy groups of spheres. The 3-sphere is naturally a smooth manifold, in fact, an embedded submanifold of R4.
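The hyperarea and hypervolume formulas quoted above are consistent with each other: differentiating the enclosed hypervolume (1/2)π2r4 with respect to r gives the hyperarea 2π2r3. A quick numerical check in Python:

```python
import math

def sphere3_area(r):
    # 3-dimensional cubic hyperarea of a 3-sphere of radius r
    return 2 * math.pi ** 2 * r ** 3

def ball4_volume(r):
    # 4-dimensional quartic hypervolume enclosed by that 3-sphere
    return 0.5 * math.pi ** 2 * r ** 4

# Central-difference approximation to d(volume)/dr at r = 1
h = 1e-6
derivative = (ball4_volume(1 + h) - ball4_volume(1 - h)) / (2 * h)
```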
17.
Norm (mathematics)
–
In mathematics, a norm is a function that assigns a strictly positive length or size to each vector in a vector space, except for the zero vector, which is assigned length zero. A seminorm, on the other hand, is allowed to assign zero length to some non-zero vectors. A norm must also satisfy certain properties pertaining to scalability and additivity, which are given in the definition below. A simple example is the 2-dimensional Euclidean space R2 equipped with the Euclidean norm; elements in this vector space are usually drawn as arrows in a 2-dimensional Cartesian coordinate system starting at the origin. The Euclidean norm assigns to each vector the length of its arrow; because of this, the Euclidean norm is often known as the magnitude. A vector space on which a norm is defined is called a normed vector space; similarly, a space with a seminorm is called a seminormed vector space. It is often possible to supply a norm for a given vector space in more than one way. If p(v) = 0 then v is the zero vector; by the first axiom, absolute homogeneity, we have p(0) = 0 and p(−v) = p(v), so that by the triangle inequality p(v) ≥ 0. A seminorm on V is a function p : V → R with only the properties 1 and 2 (absolute homogeneity and the triangle inequality). Every vector space V with seminorm p induces a normed space V/W, called the quotient space; the induced norm on V/W is well-defined and is given by p(v + W) = p(v). A topological vector space is called normable if the topology of the space can be induced by a norm. If a norm p : V → R is given on a vector space V, then the norm of a vector v ∈ V is usually denoted by enclosing it within double vertical lines: ‖v‖. Such notation is also sometimes used if p is only a seminorm. For the length of a vector in Euclidean space, the notation |v| with single vertical lines is also widespread. In Unicode, the codepoint of the double vertical line character ‖ is U+2016. The double vertical line should not be confused with the "parallel to" symbol; this is usually not a problem, because the former is used in parenthesis-like fashion whereas the latter is used as an infix operator. The double vertical line used here should also not be confused with the symbol used to denote lateral clicks.
The single vertical line | is called "vertical line" in Unicode. The trivial seminorm has p(x) = 0 for all x in V. Every linear form f on a vector space defines a seminorm by x ↦ |f(x)|. The absolute value ‖x‖ = |x| is a norm on the one-dimensional vector spaces formed by the real or complex numbers. The absolute value norm is a special case of the L1 norm.
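The distinction between a norm and a seminorm can be made concrete. A Python sketch using the L1 norm and the seminorm x ↦ |f(x)| induced by an assumed, illustrative linear form f(v) = v1 − v2:

```python
def norm1(v):
    # L1 norm: sum of the absolute values of the coordinates
    return sum(abs(x) for x in v)

def seminorm(v):
    # |f(v)| for the assumed linear form f(v) = v[0] - v[1]
    return abs(v[0] - v[1])

u, w = (1.0, -2.0), (3.0, 0.5)
triangle_ok = norm1((u[0] + w[0], u[1] + w[1])) <= norm1(u) + norm1(w)
```

The vector (3, 3) is non-zero yet has seminorm zero, which is exactly what disqualifies this seminorm from being a norm.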
18.
Cartesian plane
–
Each reference line is called a coordinate axis, or just axis, of the system, and the point where they meet is its origin, usually at the ordered pair (0, 0). The coordinates can also be defined as the positions of the projections of the point onto the two axes, expressed as signed distances from the origin. One can use the same principle to specify the position of any point in three-dimensional space by three Cartesian coordinates, its signed distances to three mutually perpendicular planes. In general, n Cartesian coordinates specify the point in an n-dimensional Euclidean space for any dimension n; these coordinates are equal, up to sign, to distances from the point to n mutually perpendicular hyperplanes. The invention of Cartesian coordinates in the 17th century by René Descartes revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra. Using the Cartesian coordinate system, geometric shapes can be described by Cartesian equations: algebraic equations involving the coordinates of the points lying on the shape. For example, a circle of radius 2, centered at the origin of the plane, may be described as the set of all points whose coordinates x and y satisfy the equation x2 + y2 = 4. A familiar example is the concept of the graph of a function. Cartesian coordinates are also essential tools for most applied disciplines that deal with geometry, including astronomy, physics, and engineering. They are the most common coordinate system used in computer graphics and computer-aided geometric design. Nicole Oresme, a French cleric and friend of the Dauphin in the 14th century, used constructions similar to Cartesian coordinates well before the time of Descartes. The adjective Cartesian refers to the French mathematician and philosopher René Descartes, who published this idea in 1637. It was independently discovered by Pierre de Fermat, who also worked in three dimensions, although Fermat did not publish the discovery. Both authors used a single axis in their treatments and have a length measured in reference to this axis.
The concept of using a pair of axes was introduced later, after Descartes' La Géométrie was translated into Latin in 1649 by Frans van Schooten; these commentators introduced several concepts while trying to clarify the ideas contained in Descartes' work. Many other coordinate systems have been developed since Descartes, such as the polar coordinates for the plane. The development of the Cartesian coordinate system would play a fundamental role in the development of the calculus by Isaac Newton. The two-coordinate description of the plane was later generalized into the concept of vector spaces. Choosing a Cartesian coordinate system for a one-dimensional space (that is, for a straight line) involves choosing a point O of the line, a unit of length, and an orientation for the line. An orientation chooses which of the two half-lines determined by O is positive, and which is negative; we then say that the line is oriented from the negative half towards the positive half.
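The circle example above (radius 2, centered at the origin) satisfies the Cartesian equation x2 + y2 = 4. A minimal Python check, with the helper name assumed:

```python
import math

def on_circle(x, y, r=2.0, tol=1e-9):
    # Cartesian equation of a circle of radius r centered at the origin: x^2 + y^2 = r^2
    return abs(x * x + y * y - r * r) < tol

t = math.pi / 3
point = (2 * math.cos(t), 2 * math.sin(t))  # a point on the circle, generated in polar form
```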
19.
Ring (mathematics)
–
In mathematics, a ring is one of the fundamental algebraic structures used in abstract algebra. It consists of a set equipped with two binary operations that generalize the arithmetic operations of addition and multiplication. Through this generalization, theorems from arithmetic are extended to non-numerical objects such as polynomials, series, and matrices. The conceptualization of rings started in the 1870s and was completed in the 1920s; key contributors include Dedekind, Hilbert, Fraenkel, and Noether. Rings were first formalized as a generalization of Dedekind domains that occur in number theory, and of polynomial rings and rings of invariants that occur in algebraic geometry and invariant theory. Afterward, they proved to be useful in other branches of mathematics such as geometry. A ring is an abelian group with a second binary operation that is associative and is distributive over the abelian group operation. By extension from the integers, the abelian group operation is called addition. Whether a ring is commutative or not has profound implications on its behavior as an abstract object; as a result, commutative ring theory, commonly known as commutative algebra, is a key topic in ring theory. Its development has been greatly influenced by problems and ideas occurring naturally in algebraic number theory. The most familiar example of a ring is the set of all integers, Z, consisting of the numbers …, −5, −4, −3, −2, −1, 0, 1, 2, 3, 4, 5, … The familiar properties for addition and multiplication of integers serve as a model for the axioms for rings. A ring is a set R equipped with two binary operations + and · satisfying the following three sets of axioms, called the ring axioms. 1. R is an abelian group under addition, meaning that (a + b) + c = a + (b + c) for all a, b, c in R; a + b = b + a for all a, b in R; there is an element 0 in R such that a + 0 = a for all a in R; and for each a in R there exists −a in R such that a + (−a) = 0. 2. R is a monoid under multiplication, meaning that (a · b) · c = a · (b · c) for all a, b, c in R.
There is an element 1 in R such that a · 1 = a and 1 · a = a for all a in R. 3. Multiplication is distributive with respect to addition: a ⋅ (b + c) = (a ⋅ b) + (a ⋅ c) for all a, b, c in R, and (b + c) · a = (b · a) + (c · a) for all a, b, c in R. As explained in § History below, many authors follow an alternative convention in which a ring is not defined to have a multiplicative identity. This article adopts the convention that, unless otherwise stated, a ring is assumed to have such an identity.
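The ring axioms above can be checked exhaustively for a small finite example such as the integers modulo n. A Python sketch (the helper name is assumed):

```python
def is_ring_mod(n):
    """Brute-force the ring axioms for Z/nZ with representatives 0..n-1."""
    R = range(n)
    add = lambda a, b: (a + b) % n
    mul = lambda a, b: (a * b) % n
    for a in R:
        if add(a, 0) != a or mul(a, 1) != a or mul(1, a) != a:
            return False                              # identities
        if not any(add(a, x) == 0 for x in R):
            return False                              # additive inverse exists
        for b in R:
            if add(a, b) != add(b, a):
                return False                          # addition commutes
            for c in R:
                if add(add(a, b), c) != add(a, add(b, c)): return False
                if mul(mul(a, b), c) != mul(a, mul(b, c)): return False
                if mul(a, add(b, c)) != add(mul(a, b), mul(a, c)): return False
                if mul(add(a, b), c) != add(mul(a, c), mul(b, c)): return False
    return True
```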
20.
Slope
–
In mathematics, the slope or gradient of a line is a number that describes both the direction and the steepness of the line. The direction of a line is either increasing, decreasing, horizontal or vertical. A line is increasing if it goes up from left to right; the slope is then positive, i.e. m > 0. A line is decreasing if it goes down from left to right; the slope is then negative, i.e. m < 0. If a line is horizontal the slope is zero; if a line is vertical the slope is undefined. The steepness, incline, or grade of a line is measured by the absolute value of the slope: a slope with a greater absolute value indicates a steeper line. Slope is calculated by finding the ratio of the vertical change to the horizontal change between two distinct points on a line. Sometimes the ratio is expressed as a quotient ("rise over run"), giving the same number for every two distinct points on the same line. A line that is decreasing has a negative "rise". The line may be practical, as set by a road surveyor, or in a diagram that models a road or a roof, either as a description or as a plan. The rise of a road between two points is the difference between the altitudes of the road at those two points, say y1 and y2; in other words, the rise is y2 − y1 = Δy. Here the slope of the road between the two points is described as the ratio of the altitude change to the horizontal distance between any two points on the line. In mathematical language, the slope m of the line is m = (y2 − y1)/(x2 − x1). The concept of slope applies directly to grades or gradients in geography. As a generalization of this practical description, the mathematics of differential calculus defines the slope of a curve at a point as the slope of the tangent line at that point; when the curve is given by a series of points in a diagram or in a list of the coordinates of points, the slope may be calculated between any two given points instead. Thereby, the simple idea of slope becomes one of the main bases of the modern world in terms of both technology and the built environment.
This is described by the equation m = Δy/Δx = (vertical change)/(horizontal change) = rise/run. Given two points (x1, y1) and (x2, y2), the change in x from one to the other is x2 − x1 (the run), while the change in y is y2 − y1 (the rise); substituting both quantities into the above equation generates the formula m = (y2 − y1)/(x2 − x1). The formula fails for a vertical line, parallel to the y axis. Suppose a line runs through two points, P and Q; since the slope in this example is positive, the direction of the line is increasing.
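The slope formula, and its failure for vertical lines, can be sketched in a few lines of Python (the function name is assumed):

```python
def slope(p, q):
    # m = (y2 - y1) / (x2 - x1); undefined for a vertical line (x2 == x1)
    (x1, y1), (x2, y2) = p, q
    if x2 == x1:
        raise ValueError("slope is undefined for a vertical line")
    return (y2 - y1) / (x2 - x1)

m = slope((1.0, 2.0), (3.0, 8.0))  # rise 6 over run 2
```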
21.
Unit hyperbola
–
In geometry, the unit hyperbola is the set of points (x, y) in the Cartesian plane that satisfy x2 − y2 = 1. In the study of indefinite orthogonal groups, the unit hyperbola forms the basis for an alternative radial length r = √(x2 − y2). Whereas the unit circle surrounds its center, the unit hyperbola requires the conjugate hyperbola y2 − x2 = 1 to complement it in the plane; this pair of hyperbolas share the asymptotes y = x and y = −x. When the conjugate of the hyperbola is in use, the alternative radial length is r = √(y2 − x2). The unit hyperbola is a special case of the rectangular hyperbola, with a particular orientation and location; as such, its eccentricity equals √2. The unit hyperbola finds applications where the circle must be replaced with the hyperbola for purposes of analytic geometry. A prominent instance is the depiction of spacetime as a pseudo-Euclidean space; there the asymptotes of the unit hyperbola form a light cone. Further, the attention to areas of hyperbolic sectors by Gregoire de Saint-Vincent led to the logarithm function. Generally, asymptotic lines to a curve are said to converge toward the curve; in algebraic geometry and the theory of algebraic curves there is a different approach to asymptotes: the curve is first interpreted in the projective plane using homogeneous coordinates, and the asymptotes are then lines that are tangent to the curve at a point at infinity, thus circumventing any need for a distance concept. In a common framework homogeneous coordinates are used, with the line at infinity determined by the equation z = 0. Both P and Q are simple points on F, with tangents x + y = 0 and x − y = 0. The Minkowski diagram is drawn in a spacetime plane where the spatial aspect has been restricted to a single dimension. The units of distance and time on such a plane may be units of 30 centimetres length and nanoseconds, or astronomical units and intervals of 8 minutes and 20 seconds, or light years and years.
Each of these scales of coordinates results in photon connections of events along diagonal lines of slope plus or minus one. The plane with the axes refers to a resting frame of reference. The diameter of the unit hyperbola represents a frame of reference in motion with rapidity a, where tanh a = y/x and (x, y) is the endpoint of the diameter on the unit hyperbola; the conjugate diameter represents the spatial hyperplane of simultaneity corresponding to rapidity a. Space is represented by planes perpendicular to the time axis, and the here and now is a singularity in the middle. The vertical time axis convention stems from Minkowski in 1908, and is illustrated on page 48 of Eddington's The Nature of the Physical World. A direct way to parameterize the unit hyperbola starts with the hyperbola xy = 1 parameterized by the exponential function as (et, e−t). This hyperbola is transformed into the unit hyperbola by a linear mapping having the matrix A = (1/2)[[1, 1], [1, −1]], since A(et, e−t) = ((et + e−t)/2, (et − e−t)/2) = (cosh t, sinh t).
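The parameterization just described can be checked numerically: applying A = (1/2)[[1, 1], [1, −1]] to (et, e−t) yields (cosh t, sinh t), which lies on x2 − y2 = 1. A Python sketch:

```python
import math

def to_unit_hyperbola(t):
    # Apply A = (1/2) [[1, 1], [1, -1]] to the point (e^t, e^-t) on xy = 1
    u, v = math.exp(t), math.exp(-t)
    return ((u + v) / 2, (u - v) / 2)

x, y = to_unit_hyperbola(0.7)  # should equal (cosh 0.7, sinh 0.7)
```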
22.
Hyperbolic angle
–
In mathematics, a hyperbolic angle is a geometric figure that defines a hyperbolic sector of a hyperbola. The theory of the hyperbolic angle parallels the relation of an ordinary angle to a circle. The hyperbolic angle is first defined for a standard position. A hyperbolic angle in standard position is the angle at (0, 0) between the ray to (1, 1) and the ray to (x, 1/x), where x > 1; the magnitude of the hyperbolic angle is the area of the corresponding hyperbolic sector, which turns out to be ln x. Note that, unlike the circular angle, the hyperbolic angle is unbounded, as is the function ln x. The hyperbolic angle in standard position is considered to be negative when 0 < x < 1. Suppose ab = 1 and cd = 1 with c > a > 1, so that (a, b) and (c, d) determine an interval on the hyperbola xy = 1. Then the squeeze mapping with diagonal elements b and a maps this interval to the standard position hyperbolic angle that runs from (1, 1) to (bc, ad). The hyperbolic angle is thus invariant under squeeze mapping, and this parameter becomes one of the most useful in the calculus of a real variable. A unit circle x2 + y2 = 1 has a circular sector with an area half of the circular angle in radians. Analogously, a hyperbola x2 − y2 = 1 has a hyperbolic sector with an area half of the hyperbolic angle. There is also a common resolution of the circular and hyperbolic cases: both curves are conic sections, and hence are treated as projective ranges in projective geometry. Given an origin point on one of these ranges, other points correspond to angles. The same construction can also be applied to the hyperbola: if P0 is taken to be the point (1, 1) and P1 a further point on the curve, it makes sense to define the hyperbolic angle from P0 to an arbitrary point on the curve as a logarithmic function of the point's value of x. Whereas in Euclidean space the multiple of a given angle traces equal distances around a circle, it traces exponential distances upon the hyperbolic line. Both circular and hyperbolic angle provide instances of an invariant measure: arcs with an angular magnitude on a circle generate a measure on certain measurable sets on the circle whose magnitude does not vary as the circle turns or rotates.
For the hyperbola the turning is by squeeze mapping, and the hyperbolic angle magnitudes stay the same when the plane is squeezed by a mapping (x, y) ↦ (rx, y/r). The quadrature of the hyperbola is the evaluation of the area of a hyperbolic sector; it can be shown to be equal to the corresponding area against an asymptote. The quadrature was first accomplished by Gregoire de Saint-Vincent in 1647 in his momentous Opus geometricum quadraturae circuli et sectionum coni. As expressed by a historian, he gave the quadrature of a hyperbola to its asymptotes, and showed that as the area increased in arithmetic series the abscissas increased in geometric series. A. A. de Sarasa interpreted the quadrature as a logarithm, and thus the geometrically defined natural logarithm is understood as the area under y = 1/x to the right of x = 1. As an example of a transcendental function, the logarithm is more familiar than its motivator, the hyperbolic angle.
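The identification of the area under y = 1/x with the natural logarithm can be verified by numerical quadrature. A Python sketch using the midpoint rule:

```python
import math

def area_under_reciprocal(x, n=100_000):
    # Midpoint-rule estimate of the area under y = 1/t from t = 1 to t = x
    h = (x - 1) / n
    return sum(h / (1 + (i + 0.5) * h) for i in range(n))

estimate = area_under_reciprocal(5.0)  # should approximate ln 5
```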
23.
Split-complex number
–
In abstract algebra, the split-complex numbers are a two-dimensional commutative algebra over the real numbers, different from the complex numbers. Every split-complex number has the form x + y j, where x and y are real numbers; the number j is similar to the imaginary unit i, except that j2 = +1. As an algebra over the reals, the split-complex numbers are the same as the direct sum of algebras R ⊕ R, under the isomorphism sending x + y j to (x + y, x − y). The name "split" comes from this characterization as a real algebra that splits. The algebra arises, for example, as the real subalgebra generated by an involutory matrix. Geometrically, split-complex numbers are related to the modulus x2 − y2 in the same way that complex numbers are related to the square of the Euclidean norm. Unlike the complex numbers, the split-complex numbers contain nontrivial idempotents, as well as zero divisors. In interval analysis, a split-complex number x + y j represents an interval with midpoint x and radius y. Another application involves using split-complex numbers alongside dual numbers and ordinary complex numbers. Split-complex numbers have many names; see the synonyms section below, and see the article Motor variable for functions of a split-complex number. It is the sign change in j2 which distinguishes the split-complex numbers from the ordinary complex ones; the quantity j here is not a real number but an independent quantity. The collection of all such z = x + y j is called the split-complex plane. Addition and multiplication of split-complex numbers are defined by (x + y j) + (u + v j) = (x + u) + (y + v) j and (x + y j)(u + v j) = (xu + yv) + (xv + yu) j. This multiplication is commutative, associative and distributes over addition. Just as for complex numbers, one can define the notion of a split-complex conjugate: if z = x + j y, the conjugate of z is defined as z∗ = x − j y. The conjugate satisfies properties similar to those of the usual complex conjugate, namely (z + w)∗ = z∗ + w∗, (zw)∗ = z∗w∗, and (z∗)∗ = z. These three properties imply that the split-complex conjugate is an automorphism of order 2.
The modulus of a split-complex number z = x + j y is given by the isotropic quadratic form ‖z‖ = z z∗ = z∗ z = x2 − y2. It has the composition algebra property ‖zw‖ = ‖z‖‖w‖. However, this quadratic form is not positive-definite but rather has signature (1, 1), so the modulus is not a norm. The associated bilinear form is given by ⟨z, w⟩ = Re(z w∗) = xu − yv, where w = u + j v; another expression for the modulus is then ‖z‖ = ⟨z, z⟩. Since it is not positive-definite, this bilinear form is not an inner product.
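The multiplication rule, the composition property of the modulus, and the existence of zero divisors can all be checked directly. A Python sketch representing split-complex numbers as (x, y) pairs:

```python
def smul(z, w):
    # (x + jy)(u + jv) = (xu + yv) + (xv + yu) j, using j^2 = +1
    x, y = z
    u, v = w
    return (x * u + y * v, x * v + y * u)

def modulus(z):
    # Isotropic quadratic form z z* = x^2 - y^2 (not positive-definite)
    x, y = z
    return x * x - y * y

z, w = (3.0, 2.0), (1.0, -4.0)
```

Note that (1 + j)(1 − j) = 0, exhibiting a pair of zero divisors.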
24.
Klein four-group
–
In mathematics, the Klein four-group is the group Z2 × Z2, the direct product of two copies of the cyclic group of order 2. It was named Vierergruppe (four-group) by Felix Klein in 1884. With four elements, the Klein four-group is the smallest non-cyclic group, and the cyclic group of order 4 and the Klein four-group are, up to isomorphism, the only groups of order 4. The smallest non-abelian group is the symmetric group of degree 3. The Klein four-group is also defined by the group presentation V = ⟨a, b ∣ a2 = b2 = (ab)2 = e⟩. All non-identity elements of the Klein group have order 2, thus any two non-identity elements can serve as generators in the above presentation. The Klein four-group is the smallest non-cyclic group; it is, however, an abelian group, and is isomorphic to the dihedral group of order 4, Dih2. The Klein four-group is also isomorphic to the direct sum Z2 ⊕ Z2, so that it can be represented as the pairs (0,0), (0,1), (1,0), (1,1) under component-wise addition modulo 2; the Klein four-group is thus an example of an elementary abelian 2-group, which is also called a Boolean group. Another numerical construction of the Klein four-group is the set {1, 3, 5, 7} with the operation being multiplication modulo 8; here a is 3, b is 5, and c = ab is 3 × 5 = 15 ≡ 7 (mod 8). The three elements of order two in the Klein four-group are interchangeable: the automorphism group of V is the group of permutations of these three elements. In fact, it is the kernel of a group homomorphism from S4 to S3. In the construction of finite rings, eight of the rings with four elements have the Klein four-group as their additive substructure. In a similar fashion, the group of units of the split-complex number ring, when divided by its identity component, also results in the Klein four-group. The Klein four-group, as a subgroup of the alternating group A4, is not the automorphism group of any simple graph.
It is, however, the automorphism group of a two-vertex graph where the vertices are connected to each other with two edges, making the graph non-simple.
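The construction of the Klein four-group as {1, 3, 5, 7} under multiplication modulo 8 can be verified exhaustively. A Python sketch:

```python
V = [1, 3, 5, 7]
mul = lambda a, b: (a * b) % 8

closed = all(mul(a, b) in V for a in V for b in V)
abelian = all(mul(a, b) == mul(b, a) for a in V for b in V)
involutive = all(mul(a, a) == 1 for a in V)   # every element is its own inverse
# Non-cyclic: no element generates all four elements by repeated multiplication.
non_cyclic = not any(len({1, a, mul(a, a), mul(mul(a, a), a)}) == 4 for a in V)
```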
25.
Heron's method
–
In numerical analysis, a branch of mathematics, there are several square root algorithms, or methods of computing the principal square root of a non-negative real number. For the square roots of a negative or complex number, see below. Finding √S is the same as solving the equation f(x) = x2 − S = 0 for a positive x; therefore, any general numerical root-finding algorithm can be used. Many square root algorithms require an initial seed value; if the initial seed value is far away from the actual square root, more iterations will be needed. It is therefore useful to have a rough estimate, which may be very inaccurate. For S = 125348 = 12.5348 × 104, a decimal estimate can be read off from the leading digits and the exponent; for S = 125348 = 111101001101001002 = 1.11101001101001002 × 216, the binary approximation gives √S ≈ 28 = 1000000002 = 256. These approximations are useful to find better seeds for iterative algorithms. The Babylonian method can be derived from Newton's method. The process of updating is iterated until the desired accuracy is obtained; this is a quadratically convergent algorithm, which means that the number of correct digits of the approximation roughly doubles with each iteration. It proceeds as follows: begin with a positive starting value x0; let xn+1 be the average of xn and S/xn; repeat step 2 until the desired accuracy is achieved. It can also be represented as x0 ≈ √S, xn+1 = (1/2)(xn + S/xn), √S = limn→∞ xn. Let the relative error in xn be defined by εn = xn/√S − 1, and thus xn = √S · (1 + εn). Then it can be shown that εn+1 = εn2/(2(1 + εn)), and thus that 0 ≤ εn+2 ≤ min(εn+12, εn+1/2), and consequently that convergence is assured provided that x0 and S are both positive. If using the rough estimate above with the Babylonian method, then the least accurate cases in ascending order are as follows: S = 1, x0 = 2, x1 = 1.250, ε1 = 0.250; S = 10, x0 = 2, x1 = 3.500; S = 10, x0 = 6, x1 = 3.833, ε1 < 0.213; S = 100, x0 = 6, x1 = 11.333. Thus in any case ε1 ≤ 2−2, and after further iterations ε8 < 2−383 < 10−115. Rounding errors will slow the convergence.
It is recommended to keep at least one extra digit beyond the desired accuracy of the xn being calculated, to minimize round-off error. A related approach, the digit-by-digit method, finds each digit of the square root in sequence.
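The Babylonian iteration described above is only a few lines of Python; here the seed 256 is the binary rough estimate for S = 125348 from the text:

```python
import math

def babylonian_sqrt(S, x0, tol=1e-12):
    # x_{n+1} = (x_n + S / x_n) / 2; quadratically convergent for S, x0 > 0
    x = x0
    while abs(x * x - S) > tol * S:
        x = (x + S / x) / 2
    return x

root = babylonian_sqrt(125348.0, 256.0)
```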