Vector field
In vector calculus and physics, a vector field is an assignment of a vector to each point in a subset of space. A vector field in the plane can be visualised as a collection of arrows, each with a given magnitude and direction and each attached to a point in the plane. Vector fields are used to model, for example, the speed and direction of a moving fluid throughout space, or the strength and direction of some force, such as the magnetic or gravitational force, as it changes from one point to another; the elements of differential and integral calculus extend naturally to vector fields. When a vector field represents force, the line integral of the vector field represents the work done by a force moving along a path; under this interpretation, conservation of energy is exhibited as a special case of the fundamental theorem of calculus. Vector fields can usefully be thought of as representing the velocity of a moving flow in space; this physical intuition leads to notions such as the divergence and the curl. In coordinates, a vector field on a domain in n-dimensional Euclidean space can be represented as a vector-valued function that associates an n-tuple of real numbers to each point of the domain.
This representation of a vector field depends on the coordinate system, and there is a well-defined transformation law in passing from one coordinate system to another. Vector fields are often discussed on open subsets of Euclidean space, but they also make sense on other subsets such as surfaces, where they associate an arrow tangent to the surface at each point. More generally, vector fields are defined on differentiable manifolds, which are spaces that look like Euclidean space on small scales but may have more complicated structure on larger scales. In this setting, a vector field gives a tangent vector at each point of the manifold. Vector fields are one kind of tensor field. Given a subset S of Rn, a vector field is represented by a vector-valued function V: S → Rn in standard Cartesian coordinates. If each component of V is continuous, then V is a continuous vector field; more generally, V is a Ck vector field if each component of V is k times continuously differentiable. A vector field can be visualized as assigning a vector to individual points within an n-dimensional space.
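As a concrete illustration, a vector field on the plane can be coded directly as a vector-valued function. The rotation field below is a standard textbook example chosen for this sketch, not one taken from the article:

```python
import math

# A smooth vector field on R^2, represented as a vector-valued function
# V: R^2 -> R^2. This is the standard rotation field V(x, y) = (-y, x).
def V(x, y):
    return (-y, x)

# Sampling the field attaches an arrow (a vector) to each point.
for point in [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]:
    vx, vy = V(*point)
    print(f"at {point}: vector ({vx}, {vy}), magnitude {math.hypot(vx, vy):.3f}")
```

Each sampled point gets its own vector, which is exactly the "collection of arrows" picture described above.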
Given two Ck vector fields V, W defined on S and a real-valued Ck function f defined on S, the two operations of scalar multiplication and vector addition, (fV)(p) := f(p)V(p) and (V + W)(p) := V(p) + W(p), make the Ck vector fields into a module over the ring of Ck functions, where multiplication of functions is defined pointwise. In physics, a vector is additionally distinguished by how its coordinates change when one measures the same vector with respect to a different background coordinate system; the transformation properties of vectors distinguish a vector as a geometrically distinct entity from a simple list of scalars, or from a covector. Thus, suppose that (x1, ..., xn) is a choice of Cartesian coordinates, in terms of which the components of the vector V are Vx = (V1,x, ..., Vn,x), and suppose that (y1, ..., yn) are n functions of the xi defining a different coordinate system. Then the components of the vector V in the new coordinates are required to satisfy the transformation law Vi,y = ∑j (∂yi/∂xj) Vj,x. Such a transformation law is called contravariant. A similar transformation law characterizes vector fields in physics: a vector field is a specification of n functions in each coordinate system subject to the transformation law relating the different coordinate systems.
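The contravariant law can be sketched for a linear change of coordinates y = Ax, where ∂yi/∂xj is simply the constant matrix entry A[i][j]. The rotation matrix and sample vector here are illustrative assumptions:

```python
import math

# Contravariant transformation of vector components under a linear change
# of coordinates y = A x: the new components are V_y[i] = sum_j A[i][j] * V_x[j],
# since dy_i/dx_j = A[i][j]. Here A rotates the plane by 90 degrees.
t = math.pi / 2
A = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]

def transform(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

v_x = [1.0, 0.0]            # components in the old coordinates
v_y = transform(A, v_x)     # components in the new coordinates
print(v_y)                  # a 90-degree rotation carries (1, 0) to (0, 1)
```

The vector itself is unchanged; only its components change, and they change by the Jacobian of the coordinate transition.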
Vector fields are thus contrasted with scalar fields, which associate a number or scalar to every point in space, and are also contrasted with simple lists of scalar fields, which do not transform under coordinate changes. Given a differentiable manifold M, a vector field on M is an assignment of a tangent vector to each point in M. More precisely, a vector field F is a mapping from M into the tangent bundle TM such that p ∘ F is the identity mapping, where p denotes the projection from TM to M. In other words, a vector field is a section of the tangent bundle. An alternative definition: a smooth vector field X on a manifold M is a linear map X: C∞(M) → C∞(M) such that X is a derivation: X(fg) = fX(g) + X(f)g for all f, g ∈ C∞(M). If the manifold M is smooth or analytic—that is, the change of coordinates is smooth or analytic—then one can make sense of the notion of smooth (or analytic) vector fields.
Scalar potential
A scalar potential, simply stated, describes the situation where the difference in the potential energies of an object in two different positions depends only on the positions, not upon the path taken by the object in traveling from one position to the other. It is a scalar field in three-space: a directionless value. A familiar example is potential energy due to gravity. A scalar potential is a fundamental concept in vector analysis and physics; the scalar potential is an example of a scalar field. Given a vector field F, the scalar potential P is defined such that F = −∇P = −(∂P/∂x, ∂P/∂y, ∂P/∂z), where ∇P is the gradient of P and the second part of the equation is minus the gradient for a function of the Cartesian coordinates x, y, z. In some cases, mathematicians may use a positive sign in front of the gradient to define the potential. Because of this definition of P in terms of the gradient, the direction of F at any point is the direction of the steepest decrease of P at that point, and its magnitude is the rate of that decrease per unit length.
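The relation F = −∇P can be checked numerically with central finite differences. The toy potential P(x, y, z) = x^2 + y^2 + z^2 below is an assumption made for this sketch, not taken from the article:

```python
# Numerical sketch of F = -grad P for an assumed toy potential
# P(x, y, z) = x^2 + y^2 + z^2.
def P(x, y, z):
    return x**2 + y**2 + z**2

def F(x, y, z, h=1e-6):
    # Each component of -grad P, approximated by a central difference.
    return (-(P(x + h, y, z) - P(x - h, y, z)) / (2 * h),
            -(P(x, y + h, z) - P(x, y - h, z)) / (2 * h),
            -(P(x, y, z + h) - P(x, y, z - h)) / (2 * h))

# Analytically, -grad P = (-2x, -2y, -2z); at (1, 2, 3) that is (-2, -4, -6).
print(F(1.0, 2.0, 3.0))
```

At every point the computed F points toward decreasing P, matching the "steepest decrease" description above.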
In order for F to be described in terms of a scalar potential only, any of the following equivalent statements have to be true: (1) −∫ab F · dl = P(b) − P(a), where the integration is over a Jordan arc passing from location a to location b and P(b) is P evaluated at location b; (2) ∮ F · dl = 0, where the integral is over any simple closed path, otherwise known as a Jordan curve; (3) ∇ × F = 0. The first of these conditions represents the fundamental theorem of the gradient and is true for any vector field that is the gradient of a differentiable single-valued scalar field P. The second condition is a requirement on F so that it can be expressed as the gradient of a scalar function. The third condition re-expresses the second condition in terms of the curl of F using the fundamental theorem of the curl. A vector field F that satisfies these conditions is said to be irrotational. Scalar potentials play a prominent role in many areas of physics and engineering; the gravity potential is the scalar potential associated with the gravity per unit mass, i.e. the acceleration due to the field, as a function of position.
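The third condition, ∇ × F = 0, can also be checked numerically. The sketch below uses finite differences and two illustrative fields chosen for this example: one a gradient field, one a rotational field that fails the test:

```python
# Numerical curl of a field F: R^3 -> R^3 at a point, via central differences,
# used to test the irrotationality condition curl F = 0.
def curl(F, x, y, z, h=1e-5):
    dFz_dy = (F(x, y + h, z)[2] - F(x, y - h, z)[2]) / (2 * h)
    dFy_dz = (F(x, y, z + h)[1] - F(x, y, z - h)[1]) / (2 * h)
    dFx_dz = (F(x, y, z + h)[0] - F(x, y, z - h)[0]) / (2 * h)
    dFz_dx = (F(x + h, y, z)[2] - F(x - h, y, z)[2]) / (2 * h)
    dFy_dx = (F(x + h, y, z)[1] - F(x - h, y, z)[1]) / (2 * h)
    dFx_dy = (F(x, y + h, z)[0] - F(x, y - h, z)[0]) / (2 * h)
    return (dFz_dy - dFy_dz, dFx_dz - dFz_dx, dFy_dx - dFx_dy)

F_grad = lambda x, y, z: (2 * x, 2 * y, 2 * z)   # gradient of x^2 + y^2 + z^2
F_rot = lambda x, y, z: (-y, x, 0.0)             # rotational field, curl = (0, 0, 2)

print(curl(F_grad, 1.0, 2.0, 3.0))  # approximately (0, 0, 0): irrotational
print(curl(F_rot, 1.0, 2.0, 3.0))   # approximately (0, 0, 2): not irrotational
```

Only the gradient field passes the curl test, so only it admits a scalar potential.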
The gravity potential is the gravitational potential energy per unit mass. In electrostatics the electric potential is the scalar potential associated with the electric field, i.e. with the electrostatic force per unit charge. The electric potential is in this case the electrostatic potential energy per unit charge. In fluid dynamics, irrotational lamellar fields have a scalar potential only in the special case when they are Laplacian fields. Certain aspects of the nuclear force can be described by a Yukawa potential. Potentials play a prominent role in the Lagrangian and Hamiltonian formulations of classical mechanics. Further, the scalar potential is a fundamental quantity in quantum mechanics. Not every vector field has a scalar potential; those that do are called conservative, corresponding to the notion of conservative force in physics. Examples of non-conservative forces include frictional forces, magnetic forces, and, in fluid mechanics, a solenoidal velocity field. By the Helmholtz decomposition theorem, however, all vector fields can be described in terms of a scalar potential and a corresponding vector potential.
In electrodynamics, the electromagnetic scalar and vector potentials are known together as the electromagnetic four-potential. If F is a conservative vector field and its components have continuous partial derivatives, the potential of F with respect to a reference point r0 is defined in terms of the line integral V(r) = −∫C F · dr = −∫ab F(r(t)) · r′(t) dt, where C is a parametrized path r(t), a ≤ t ≤ b, from r0 to r, with r(a) = r0 and r(b) = r.
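This line-integral definition translates directly into a numerical sketch. All choices below are illustrative assumptions: F is the conservative field −∇P for the toy potential P = x^2 + y^2 + z^2, and the path is a straight line from r0 to r integrated with a midpoint rule:

```python
# Potential from the line integral V(r) = -∫_C F · dr along a straight
# path r(t) = r0 + t (r - r0), t in [0, 1], via a midpoint rule.
def F(x, y, z):
    return (-2 * x, -2 * y, -2 * z)   # -grad P for P = x^2 + y^2 + z^2

def potential(F, r0, r, n=1000):
    d = [b - a for a, b in zip(r0, r)]   # r'(t) is constant on a straight line
    total = 0.0
    for k in range(n):
        t = (k + 0.5) / n                # midpoint of each subinterval
        p = [a + t * di for a, di in zip(r0, d)]
        total += sum(fi * di for fi, di in zip(F(*p), d)) / n
    return -total

# With reference point r0 = (0, 0, 0), the recovered value should match
# P(r) - P(r0) = 14 at r = (1, 2, 3).
print(potential(F, (0.0, 0.0, 0.0), (1.0, 2.0, 3.0)))
```

Because F is conservative, any other path between the same endpoints would give the same value, which is the content of the first condition listed earlier.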
Pythagorean theorem
In mathematics, the Pythagorean theorem, also known as Pythagoras' theorem, is a fundamental relation in Euclidean geometry among the three sides of a right triangle. It states that the square of the hypotenuse is equal to the sum of the squares of the other two sides. The theorem can be written as an equation relating the lengths of the sides a, b and c, called the "Pythagorean equation": a^2 + b^2 = c^2, where c represents the length of the hypotenuse and a and b the lengths of the triangle's other two sides. Although it is argued that knowledge of the theorem predates him, the theorem is named after the ancient Greek mathematician Pythagoras, as it is he who, by tradition, is credited with its first proof, although no evidence of it exists. There is some evidence that Babylonian mathematicians understood the formula, although little of it indicates an application within a mathematical framework. Mesopotamian and Chinese mathematicians discovered the theorem independently and, in some cases, provided proofs for special cases.
The theorem has been given numerous proofs – possibly the most for any mathematical theorem. They are very diverse, including both geometric proofs and algebraic proofs, with some dating back thousands of years. The theorem can be generalized in various ways: to higher-dimensional spaces, to spaces that are not Euclidean, to objects that are not right triangles, and indeed to objects that are not triangles at all but n-dimensional solids. The Pythagorean theorem has attracted interest outside mathematics as a symbol of mathematical abstruseness, mystique, or intellectual power. The Pythagorean theorem was known long before Pythagoras, but he may well have been the first to prove it. In any event, the proof attributed to him is very simple and is called a proof by rearrangement. The two large squares shown in the figure each contain four identical triangles; the only difference between the two large squares is that the triangles are arranged differently. Therefore, the white space within each of the two large squares must have equal area.
Equating the area of the white space yields the Pythagorean theorem, Q.E.D. That Pythagoras originated this simple proof is sometimes inferred from the writings of the Greek philosopher and mathematician Proclus. Several other proofs of this theorem are described below. If c denotes the length of the hypotenuse and a and b denote the lengths of the other two sides, the Pythagorean theorem can be expressed as the Pythagorean equation: a^2 + b^2 = c^2. If the lengths of both a and b are known, then c can be calculated as c = √(a^2 + b^2). If the length of the hypotenuse c and of one side are known, then the length of the other side can be calculated as a = √(c^2 − b^2) or b = √(c^2 − a^2). The Pythagorean equation relates the sides of a right triangle in a simple way, so that if the lengths of any two sides are known, the length of the third side can be found. Another corollary of the theorem is that in any right triangle, the hypotenuse is greater than any one of the other sides, but less than their sum. A generalization of this theorem is the law of cosines, which allows the computation of the length of any side of any triangle, given the lengths of the other two sides and the angle between them.
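These formulas translate directly into code; a minimal sketch using only the standard library:

```python
import math

# The Pythagorean equation a^2 + b^2 = c^2, solved for each side.
def hypotenuse(a, b):
    return math.hypot(a, b)          # c = sqrt(a^2 + b^2)

def leg(c, b):
    return math.sqrt(c**2 - b**2)    # a = sqrt(c^2 - b^2); requires c >= b

print(hypotenuse(3, 4))  # 5.0
print(leg(5, 4))         # 3.0
```

The requirement c ≥ b in `leg` reflects the corollary above: the hypotenuse is the longest side, so c^2 − b^2 is never negative.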
If the angle between the other sides is a right angle, the law of cosines reduces to the Pythagorean equation. This theorem may have more known proofs than any other. The following proof is based on the proportionality of the sides of two similar triangles, that is, upon the fact that the ratio of any two corresponding sides of similar triangles is the same regardless of the size of the triangles. Let ABC represent a right triangle, with the right angle located at C. Draw the altitude from point C, and call H its intersection with the side AB. Point H divides the length of the hypotenuse c into parts d and e. The new triangle ACH is similar to triangle ABC, because they both have a right angle and they share the angle at A, meaning that the third angle will be the same in both triangles as well, marked as θ in the figure. By similar reasoning, the triangle CBH is also similar to ABC. The proof of similarity of the triangles requires the triangle postulate: the sum of the angles in a triangle is two right angles, which is equivalent to the parallel postulate.
Similarity of the triangles leads to the equality of ratios of corresponding sides: BC/AB = BH/BC and AC/AB = AH/AC. The first result equates the cosines of the angles θ, whereas the second result equates their sines.
Monotonic function
In mathematics, a monotonic function is a function between ordered sets that preserves or reverses the given order. This concept first arose in calculus and was later generalized to the more abstract setting of order theory. In calculus, a function f defined on a subset of the real numbers with real values is called monotonic if and only if it is either entirely non-increasing or entirely non-decreasing; that is, as per Fig. 1, a function that increases monotonically does not have to increase exclusively, it simply must not decrease. A function is called monotonically increasing if for all x and y such that x ≤ y one has f(x) ≤ f(y), so f preserves the order. A function is called monotonically decreasing if, whenever x ≤ y, then f(x) ≥ f(y), so it reverses the order. If the order ≤ in the definition of monotonicity is replaced by the strict order <, one obtains a stronger requirement. A function with this property is called strictly increasing. Again, by inverting the order symbol, one finds a corresponding concept called strictly decreasing. Functions that are strictly increasing or strictly decreasing are one-to-one. If it is not clear that "increasing" and "decreasing" are taken to include the possibility of repeating the same value at successive arguments, one may use the terms weakly increasing and weakly decreasing to stress this possibility.
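On a finite sample of points, the weak definitions can be checked directly. A small sketch, where the sample grid and the test functions are illustrative choices:

```python
# Weak monotonicity of a function f, checked on a sorted sample of points xs:
# increasing means f(a) <= f(b) whenever a <= b, decreasing reverses the order.
def is_monotone_increasing(f, xs):
    return all(f(a) <= f(b) for a, b in zip(xs, xs[1:]))

def is_monotone_decreasing(f, xs):
    return all(f(a) >= f(b) for a, b in zip(xs, xs[1:]))

xs = [x / 10 for x in range(-20, 21)]                # grid on [-2, 2]
print(is_monotone_increasing(lambda x: x**3, xs))    # True: x^3 preserves order
print(is_monotone_increasing(lambda x: x * x, xs))   # False: x^2 falls, then rises
```

Note the non-strict `<=`: a constant function passes this test, matching the "weakly increasing" usage above.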
The terms "non-decreasing" and "non-increasing" should not be confused with the negative qualifications "not decreasing" and "not increasing". For example, the function of figure 3 first falls, then rises, then falls again; it is therefore not decreasing and not increasing, but it is neither non-decreasing nor non-increasing. A function f is said to be absolutely monotonic over an interval if the derivatives of all orders of f are nonnegative or all nonpositive at all points on the interval. The term monotonic transformation can cause some confusion because it refers to a transformation by a strictly increasing function. This is the case in economics with respect to the ordinal properties of a utility function being preserved across a monotonic transform. In this context, what we are calling a "monotonic transformation" is, more precisely, called a "positive monotonic transformation", in order to distinguish it from a "negative monotonic transformation", which reverses the order of the numbers. The following properties are true for a monotonic function f: R → R: f has limits from the right and from the left at every point of its domain.
f can only have jump discontinuities, and it can only have countably many of them. The discontinuities, however, do not necessarily consist of isolated points and may even be dense in an interval. These properties are the reason why monotonic functions are useful in technical work in analysis. Two facts about these functions are: if f is a monotonic function defined on an interval I, then f is differentiable almost everywhere on I, i.e. the set of numbers x in I such that f is not differentiable in x has Lebesgue measure zero (in addition, this result cannot be improved to countable: see Cantor function); and if f is a monotonic function defined on an interval I, then f is Riemann integrable.
Least squares
The method of least squares is a standard approach in regression analysis to approximate the solution of overdetermined systems, i.e. sets of equations in which there are more equations than unknowns. "Least squares" means that the overall solution minimizes the sum of the squares of the residuals made in the results of every single equation. The most important application is in data fitting; the best fit in the least-squares sense minimizes the sum of squared residuals. When the problem has substantial uncertainties in the independent variable, simple regression and least-squares methods have problems. Least-squares problems fall into two categories, linear (or ordinary) least squares and nonlinear least squares, depending on whether or not the residuals are linear in all unknowns. The linear least-squares problem occurs in statistical regression analysis; the nonlinear problem is usually solved by iterative refinement. Polynomial least squares describes the variance in a prediction of the dependent variable as a function of the independent variable and the deviations from the fitted curve.
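For the simplest linear case, fitting a line y = mx + b to more data points than unknowns, the least-squares solution has a closed form via the normal equations. A pure-Python sketch with made-up data:

```python
# Ordinary least squares for a line y = m*x + b, using the closed-form
# normal equations for this two-parameter case (no external libraries).
def fit_line(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

# Overdetermined system: 5 equations (one per data point), 2 unknowns (m, b).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 7.1, 8.8]
m, b = fit_line(xs, ys)
print(m, b)  # approximately m = 1.96, b = 1.10
```

No line passes through all five points exactly; the fitted (m, b) is the pair that minimizes the sum of squared residuals over all candidate lines.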
When the observations come from an exponential family and mild conditions are satisfied, least-squares estimates and maximum-likelihood estimates are identical. The method of least squares can also be derived as a method-of-moments estimator. The following discussion is mostly presented in terms of linear functions, but the use of least squares is valid and practical for more general families of functions. Also, by iteratively applying local quadratic approximation to the likelihood, the least-squares method may be used to fit a generalized linear model. The least-squares method is usually credited to Carl Friedrich Gauss, but it was first published by Adrien-Marie Legendre. The method of least squares grew out of the fields of astronomy and geodesy, as scientists and mathematicians sought to provide solutions to the challenges of navigating the Earth's oceans during the Age of Exploration. The accurate description of the behavior of celestial bodies was the key to enabling ships to sail in open seas, where sailors could no longer rely on land sightings for navigation.
The method was the culmination of several advances that took place during the course of the eighteenth century: the combination of different observations as being the best estimate of the true value; the combination of different observations taken under the same conditions, as opposed to trying one's best to observe and record a single observation accurately (an approach known as the method of averages, notably used by Tobias Mayer while studying the librations of the moon in 1750 and by Pierre-Simon Laplace in explaining the differences in motion of Jupiter and Saturn in 1788); the combination of different observations taken under different conditions (which came to be known as the method of least absolute deviation, notably performed by Roger Joseph Boscovich in his work on the shape of the earth in 1757 and by Pierre-Simon Laplace for the same problem in 1799); and the development of a criterion that can be evaluated to determine when the solution with the minimum error has been achieved.
Laplace tried to specify a mathematical form of the probability density for the errors and define a method of estimation that minimizes the error of estimation. For this purpose, Laplace used a symmetric two-sided exponential distribution, now called the Laplace distribution, to model the error distribution, and used the sum of absolute deviations as the error of estimation. He felt these to be the simplest assumptions he could make, and he had hoped to obtain the arithmetic mean as the best estimate. Instead, his estimator was the posterior median. The first clear and concise exposition of the method of least squares was published by Legendre in 1805. The technique is described as an algebraic procedure for fitting linear equations to data, and Legendre demonstrates the new method by analyzing the same data as Laplace for the shape of the earth. The value of Legendre's method of least squares was recognized by leading astronomers and geodesists of the time. In 1809 Carl Friedrich Gauss published his method of calculating the orbits of celestial bodies.
In that work he claimed to have been in possession of the method of least squares since 1795. This naturally led to a priority dispute with Legendre. However, to Gauss's credit, he went beyond Legendre and succeeded in connecting the method of least squares with the principles of probability and with the normal distribution. He managed to complete Laplace's program of specifying a mathematical form of the probability density for the observations, depending on a finite number of unknown parameters, and of defining a method of estimation that minimizes the error of estimation. Gauss showed that the arithmetic mean is indeed the best estimate of the location parameter by changing both the probability density and the method of estimation; he turned the problem around by asking what form the density should have and what method of estimation should be used to get the arithmetic mean as the estimate of the location parameter. In this attempt, he invented the normal distribution. An early demonstration of the strength of Gauss's method came when it was used to predict the future location of the newly discovered asteroid Ceres.
Euclidean geometry
Euclidean geometry is a mathematical system attributed to the Alexandrian Greek mathematician Euclid, which he described in his textbook on geometry, the Elements. Euclid's method consists in assuming a small set of intuitively appealing axioms and deducing many other propositions from these. Although many of Euclid's results had been stated by earlier mathematicians, Euclid was the first to show how these propositions could fit into a comprehensive deductive and logical system. The Elements begins with plane geometry, still taught in secondary school as the first axiomatic system and the first examples of formal proof. It goes on to the solid geometry of three dimensions. Much of the Elements states results of what are now called algebra and number theory, explained in geometrical language. For more than two thousand years, the adjective "Euclidean" was unnecessary because no other sort of geometry had been conceived. Euclid's axioms seemed so intuitively obvious that any theorem proved from them was deemed true in an absolute, often metaphysical, sense.
Today, many other self-consistent non-Euclidean geometries are known, the first ones having been discovered in the early 19th century. An implication of Albert Einstein's theory of general relativity is that physical space itself is not Euclidean, and Euclidean space is a good approximation for it only over short distances. Euclidean geometry is an example of synthetic geometry, in that it proceeds logically from axioms describing basic properties of geometric objects such as points and lines to propositions about those objects, all without the use of coordinates to specify those objects. This is in contrast to analytic geometry, which uses coordinates to translate geometric propositions into algebraic formulas. The Elements is a systematization of earlier knowledge of geometry. Its improvement over earlier treatments was recognized, with the result that there was little interest in preserving the earlier ones, and they are now nearly all lost. There are 13 books in the Elements: Books I–IV and VI discuss plane geometry.
Many results about plane figures are proved, for example, "In any triangle two angles taken together in any manner are less than two right angles," and the Pythagorean theorem, "In right-angled triangles the square on the side subtending the right angle is equal to the squares on the sides containing the right angle." Books V and VII–X deal with number theory, with numbers treated geometrically as lengths of line segments or areas of regions. Notions such as prime numbers and rational and irrational numbers are introduced, and it is proved that there are infinitely many prime numbers. Books XI–XIII concern solid geometry. A typical result is the 1:3 ratio between the volume of a cone and a cylinder with the same height and base; the Platonic solids are constructed. Euclidean geometry is an axiomatic system, in which all theorems are derived from a small number of simple axioms. Until the advent of non-Euclidean geometry, these axioms were considered to be true in the physical world, so that all the theorems would be true as well. However, Euclid's reasoning from assumptions to conclusions remains valid independent of their physical reality.
Near the beginning of the first book of the Elements, Euclid gives five postulates for plane geometry, stated in terms of constructions. Let the following be postulated: to draw a straight line from any point to any point; to produce a finite straight line continuously in a straight line; to describe a circle with any centre and distance; that all right angles are equal to one another; and that, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which the angles are less than two right angles. Although Euclid only explicitly asserts the existence of the constructed objects, in his reasoning they are implicitly assumed to be unique. The Elements also include the following five "common notions": Things that are equal to the same thing are equal to one another. If equals are added to equals, the wholes are equal. If equals are subtracted from equals, the differences are equal.
Things that coincide with one another are equal to one another. The whole is greater than the part. Modern scholars agree that Euclid's postulates do not provide the complete logical foundation that Euclid required for his presentation. Modern treatments use more complete sets of axioms. To the ancients, the parallel postulate seemed less obvious than the others; they aspired to create a system of absolutely certain propositions, and to them it seemed as if the parallel line postulate required proof from simpler statements. It is now known that such a proof is impossible, since one can construct consistent systems of geometry in which the parallel postulate is true, and others in which it is false. Euclid himself seems to have considered it as being qualitatively different from the others, as evidenced by the organization of the Elements: his first 28 propositions are those that can be proved without it. Many alternative axioms can be formulated. For example, Playfair's axiom states: In a plane, through a point not on a given straight line, at most one line can be drawn that never meets the given line.
Kullback–Leibler divergence
In mathematical statistics, the Kullback–Leibler divergence is a measure of how one probability distribution differs from a second, reference probability distribution. Applications include characterizing the relative entropy in information systems, randomness in continuous time-series, and information gain when comparing statistical models of inference. In contrast to variation of information, it is a distribution-wise asymmetric measure and thus does not qualify as a statistical metric of spread. In the simple case, a Kullback–Leibler divergence of 0 indicates that the two distributions in question are identical. In simplified terms, it is a measure of surprise, with diverse applications in fields such as applied statistics, fluid mechanics and machine learning. The Kullback–Leibler divergence was introduced by Solomon Kullback and Richard Leibler in 1951 as the directed divergence between two distributions. The divergence is discussed in Kullback's text Information Theory and Statistics. For discrete probability distributions P and Q defined on the same probability space X, the Kullback–Leibler divergence between P and Q is defined to be D KL(P ∥ Q) = ∑ x ∈ X P(x) log(P(x)/Q(x)).
In other words, it is the expectation of the logarithmic difference between the probabilities P and Q, where the expectation is taken using the probabilities P. The Kullback–Leibler divergence is defined only if for all x, Q(x) = 0 implies P(x) = 0. Whenever P(x) is zero, the contribution of the corresponding term is interpreted as zero because lim x → 0+ x log(x) = 0. For distributions P and Q of a continuous random variable, the Kullback–Leibler divergence is defined to be the integral D KL(P ∥ Q) = ∫ p(x) log(p(x)/q(x)) dx, where p and q denote the probability densities of P and Q. More generally, if P and Q are probability measures over a set X and P is absolutely continuous with respect to Q, then the Kullback–Leibler divergence from Q to P is defined as D KL(P ∥ Q) = ∫ X log(dP/dQ) dP, where dP/dQ is the Radon–Nikodym derivative of P with respect to Q, provided the expression on the right-hand side exists. Equivalently, this can be written as D KL(P ∥ Q) = ∫ X (dP/dQ) log(dP/dQ) dQ, which is the entropy of P relative to Q. Continuing in this case, if μ is any measure on X for which p = dP/dμ and q = dQ/dμ exist, then the Kullback–Leibler divergence from Q to P is given as D KL(P ∥ Q) = ∫ X p log(p/q) dμ.
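The discrete definition, with the 0 log 0 = 0 convention and the requirement that Q(x) = 0 force P(x) = 0, can be sketched directly; the example distributions below are illustrative:

```python
import math

# Discrete Kullback–Leibler divergence:
# D_KL(P || Q) = sum_x P(x) * log(P(x) / Q(x)), in nats.
# Terms with P(x) = 0 contribute 0; Q(x) = 0 with P(x) > 0 is undefined.
def kl_divergence(p, q):
    total = 0.0
    for pi, qi in zip(p, q):
        if pi > 0:
            if qi == 0:
                raise ValueError("D_KL undefined: Q(x) = 0 while P(x) > 0")
            total += pi * math.log(pi / qi)
    return total

p = [0.5, 0.5]
q = [0.9, 0.1]
print(kl_divergence(p, p))  # 0.0: identical distributions
print(kl_divergence(p, q))  # positive, and != kl_divergence(q, p): asymmetric
```

Computing both orderings on the same pair shows the asymmetry noted above: D_KL(P ∥ Q) is generally not equal to D_KL(Q ∥ P).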