1.
Orthogonal coordinates
–
In mathematics, orthogonal coordinates are defined as a set of d coordinates q = (q1, q2, …, qd) in which the coordinate surfaces all meet at right angles. A coordinate surface for a particular coordinate qk is the curve, surface, or hypersurface on which qk is constant. Orthogonal coordinates are a special but extremely common case of curvilinear coordinates. The chief advantage of non-Cartesian coordinates is that they can be chosen to match the symmetry of the problem; the reason to prefer orthogonal coordinates over general curvilinear coordinates is simplicity, since many complications arise when coordinates are not orthogonal. For example, in orthogonal coordinates many problems may be solved by separation of variables, a mathematical technique that converts a complex d-dimensional problem into d one-dimensional problems that can be solved in terms of known functions. Many equations can be reduced to Laplace's equation or the Helmholtz equation: Laplace's equation is separable in 13 orthogonal coordinate systems, and the Helmholtz equation is separable in 11 orthogonal coordinate systems. Orthogonal coordinates never have off-diagonal terms in their metric tensor, and the scaling functions hi are used to calculate differential operators in the new coordinates, e.g. the gradient, the Laplacian, the divergence and the curl. A simple method for generating orthogonal coordinate systems in two dimensions is by a conformal mapping of a standard two-dimensional grid of Cartesian coordinates; a complex number z = x + iy can be formed from the real coordinates x and y. However, there are other orthogonal coordinate systems in three dimensions that cannot be obtained by projecting or rotating a two-dimensional system, such as the ellipsoidal coordinates. More general orthogonal coordinates may be obtained by starting with some necessary coordinate surfaces and considering their orthogonal trajectories. In Cartesian coordinates, the basis vectors are fixed.
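The complex-number mapping just described can be sketched numerically. As an illustrative assumption, take the conformal map w = z²: the images of the Cartesian grid lines x = const and y = const still meet at right angles wherever the derivative is nonzero.

```python
# Sketch: generating 2D orthogonal coordinates from the conformal map w = z^2
# (the choice of map is an illustrative assumption). The images of the grid
# lines x = const and y = const meet at right angles wherever dw/dz != 0.

def grid_tangents(x, y, h=1e-6):
    """Tangent vectors of the mapped grid curves, by central differences."""
    f = lambda u, v: complex(u, v) ** 2
    tx = (f(x + h, y) - f(x - h, y)) / (2 * h)  # along the image of y = const
    ty = (f(x, y + h) - f(x, y - h)) / (2 * h)  # along the image of x = const
    return tx, ty

def dot(a, b):
    """Euclidean dot product of complex numbers viewed as 2D vectors."""
    return a.real * b.real + a.imag * b.imag

tx, ty = grid_tangents(1.3, 0.7)
print(abs(dot(tx, ty)) < 1e-6)  # True: the mapped grid curves are orthogonal
```

Any holomorphic map with nonvanishing derivative would serve equally well here; z² is merely a convenient example.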
What distinguishes orthogonal coordinates is that, though the basis vectors vary from point to point, they remain mutually orthogonal; note that the vectors are not necessarily of equal length. The useful functions known as scale factors of the coordinates are simply the lengths hi of the basis vectors ei. The scale factors are sometimes called Lamé coefficients, but this terminology is best avoided, since some better-known coefficients in linear elasticity carry the same name. Components in the normalized basis are most common in applications, for clarity of the quantities. The basis vectors shown above are covariant basis vectors. While a vector is an objective quantity, meaning its identity is independent of any coordinate system, the components of a vector depend on what basis the vector is represented in. Note that the summation symbol Σ and the summation range, indicating summation over all basis vectors, are often omitted. Vector addition and negation are done component-wise, just as in Cartesian coordinates, with no complication; extra considerations may be necessary for other vector operations. Note, however, that all of these operations assume that two vectors in a vector field are bound to the same point.
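The scale factors described above can be computed numerically. As a hedged sketch (spherical coordinates are chosen here purely as an example), the covariant basis vectors ei = ∂r/∂qi come out mutually orthogonal, with lengths h_r = 1, h_θ = r, h_φ = r sin θ.

```python
import math

# Sketch: covariant basis vectors and scale factors for spherical coordinates
# (r, theta, phi); the coordinate system is chosen as an illustrative example.

def position(r, th, ph):
    """Map (r, theta, phi) to Cartesian (x, y, z)."""
    return (r * math.sin(th) * math.cos(ph),
            r * math.sin(th) * math.sin(ph),
            r * math.cos(th))

def basis_vector(q, i, h=1e-6):
    """Covariant basis vector e_i = d(position)/d(q_i), by central differences."""
    qp, qm = list(q), list(q)
    qp[i] += h
    qm[i] -= h
    p, m = position(*qp), position(*qm)
    return tuple((a - b) / (2 * h) for a, b in zip(p, m))

dot = lambda u, v: sum(a * b for a, b in zip(u, v))
norm = lambda u: math.sqrt(dot(u, u))

q = (2.0, 0.8, 1.1)  # a sample point (r, theta, phi)
e = [basis_vector(q, i) for i in range(3)]

# Off-diagonal metric terms vanish (orthogonality), up to finite-difference error:
print(all(abs(dot(e[i], e[j])) < 1e-6 for i in range(3) for j in range(3) if i != j))
# Scale factors h_i = |e_i|: approximately 1, r = 2, and r*sin(theta)
print([round(norm(ei), 4) for ei in e])
```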
Orthogonal coordinates
–
Visualization of 2D orthogonal coordinates. Curves obtained by holding all but one coordinate constant are shown, along with basis vectors. Note that the basis vectors aren't of equal length: they need not be; they only need to be orthogonal.
2.
Skew coordinates
–
A system of skew coordinates is a curvilinear coordinate system where the coordinate surfaces are not orthogonal, in contrast to orthogonal coordinates. Skew coordinate systems can be useful if the geometry of a problem fits well into a skewed system; for example, solving Laplace's equation in a parallelogram will be easiest when done in appropriately skewed coordinates. The simplest 3D case of a skew coordinate system is a Cartesian one where one of the axes has been bent by some angle ϕ. For this example, the x axis of a Cartesian coordinate system has been bent toward the z axis by ϕ. Let e1, e2, and e3 respectively be unit vectors along the x, y, and z axes. We'll favor writing quantities with respect to the covariant basis; since the basis vectors are all constant, vector addition and subtraction will simply be familiar component-wise addition and subtraction. Now, let a = ∑i aᶦ ei and b = ∑i bᶦ ei, where the sums indicate summation over all values of the index. Finally, the curl of a vector is ∇ × a = ∑i,j,k ek ϵᶦʲᵏ ∂aj/∂qᶦ, where ϵᶦʲᵏ is the contravariant Levi-Civita tensor, which in this system carries a factor of 1/cos ϕ, since the Jacobian determinant of the skewed basis is cos ϕ.
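The covariant machinery above can be sketched in code (a hypothetical example; the angle ϕ = π/6 and the sample vectors are assumptions): the metric tensor g_ij = ei · ej acquires off-diagonal terms, and the dot product of two vectors given by contravariant components becomes ∑ij g_ij aᶦ bʲ.

```python
import math

# Sketch: dot products in a skew coordinate system whose x axis has been bent
# toward the z axis by an angle phi (phi = pi/6 is an illustrative assumption).
phi = math.pi / 6
e = [(math.cos(phi), 0.0, math.sin(phi)),  # e1: the bent x axis
     (0.0, 1.0, 0.0),                      # e2: the y axis
     (0.0, 0.0, 1.0)]                      # e3: the z axis

# Metric tensor g_ij = e_i . e_j; off-diagonal terms appear, unlike in
# orthogonal coordinates.
g = [[sum(e[i][k] * e[j][k] for k in range(3)) for j in range(3)]
     for i in range(3)]

def skew_dot(a, b):
    """Dot product from contravariant components: sum over i, j of g_ij a^i b^j."""
    return sum(g[i][j] * a[i] * b[j] for i in range(3) for j in range(3))

def to_cartesian(v):
    """Assemble the actual vector sum_i v^i e_i in Cartesian components."""
    return tuple(sum(v[i] * e[i][k] for i in range(3)) for k in range(3))

a, b = (1.0, 2.0, 0.0), (0.0, 1.0, 3.0)
cart = sum(x * y for x, y in zip(to_cartesian(a), to_cartesian(b)))
print(abs(skew_dot(a, b) - cart) < 1e-12)  # True: both give the same scalar
```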
Skew coordinates
–
A Cartesian coordinate system where the x axis has been bent toward the z axis.
3.
Cartesian coordinate system
–
Each reference line is called a coordinate axis or just axis of the system, and the point where they meet is its origin, usually at the ordered pair (0, 0). The coordinates can also be defined as the positions of the perpendicular projections of the point onto the two axes, expressed as signed distances from the origin. One can use the same principle to specify the position of any point in three-dimensional space by three Cartesian coordinates, its signed distances to three mutually perpendicular planes. In general, n Cartesian coordinates specify the point in an n-dimensional Euclidean space for any dimension n; these coordinates are equal, up to sign, to distances from the point to n mutually perpendicular hyperplanes. The invention of Cartesian coordinates in the 17th century by René Descartes revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra. Using the Cartesian coordinate system, geometric shapes can be described by Cartesian equations: algebraic equations involving the coordinates of the points lying on the shape. For example, a circle of radius 2, centered at the origin of the plane, may be described as the set of all points whose coordinates x and y satisfy the equation x² + y² = 4. A familiar example is the concept of the graph of a function. Cartesian coordinates are also essential tools for most applied disciplines that deal with geometry, including astronomy, physics, and engineering. They are the most common coordinate system used in computer graphics and computer-aided geometric design. Nicole Oresme, a French cleric and friend of the Dauphin in the 14th century, used constructions similar to Cartesian coordinates well before the time of Descartes. The adjective Cartesian refers to the French mathematician and philosopher René Descartes, who published this idea in 1637; it was independently discovered by Pierre de Fermat, who also worked in three dimensions, although Fermat did not publish the discovery. Both authors used a single axis in their treatments and have a variable length measured in reference to this axis.
The concept of using a pair of axes was introduced later, after Descartes' La Géométrie was translated into Latin in 1649 by Frans van Schooten and his students; these commentators introduced several concepts while trying to clarify the ideas contained in Descartes' work. Many other coordinate systems have been developed since Descartes, such as the polar coordinates for the plane. The development of the Cartesian coordinate system would play a fundamental role in the development of the calculus by Isaac Newton. The two-coordinate description of the plane was later generalized into the concept of vector spaces. Choosing a Cartesian coordinate system for a one-dimensional space – that is, for a straight line – involves choosing a point O of the line (the origin), a unit of length, and an orientation for the line. An orientation chooses which of the two half-lines determined by O is positive and which is negative; we say that the line is oriented from the negative half towards the positive half.
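The circle example above can be made concrete with a short sketch (the helper name on_circle is hypothetical): the circle of radius 2 centered at the origin is the set of points whose coordinates satisfy x² + y² = 4.

```python
import math

# Sketch: membership test for the Cartesian equation x^2 + y^2 = 4
# (a circle of radius 2 centered at the origin), with a float tolerance.

def on_circle(x, y, radius=2.0, tol=1e-9):
    return abs(x * x + y * y - radius * radius) < tol

print(on_circle(2, 0))                        # True
print(on_circle(math.sqrt(2), math.sqrt(2)))  # True: 2 + 2 == 4
print(on_circle(1, 1))                        # False: 1 + 1 != 4
```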
Cartesian coordinate system
–
The right-hand rule.
Cartesian coordinate system
–
Illustration of a Cartesian coordinate plane. Four points are marked and labeled with their coordinates: (2,3) in green, (−3,1) in red, (−1.5,−2.5) in blue, and the origin (0,0) in purple.
Cartesian coordinate system
–
3D Cartesian Coordinate Handedness
4.
Geometry
–
Geometry is a branch of mathematics concerned with questions of shape, size, relative position of figures, and the properties of space. A mathematician who works in the field of geometry is called a geometer. Geometry arose independently in a number of early cultures as a practical way of dealing with lengths, areas, and volumes. Geometry began to see elements of formal mathematical science emerging in the West as early as the 6th century BC. By the 3rd century BC, geometry was put into an axiomatic form by Euclid, whose treatment, Euclid's Elements, set a standard for many centuries to follow. Geometry arose independently in India, with texts providing rules for geometric constructions appearing as early as the 3rd century BC. Islamic scientists preserved Greek ideas and expanded on them during the Middle Ages. By the early 17th century, geometry had been put on a solid analytic footing by mathematicians such as René Descartes. Since then, and into modern times, geometry has expanded into non-Euclidean geometry and manifolds. While geometry has evolved significantly throughout the years, there are some general concepts that are more or less fundamental to geometry. These include the concepts of points, lines, planes, surfaces, angles, and curves. Contemporary geometry has many subfields. Euclidean geometry is geometry in its classical sense; the mandatory educational curriculum of the majority of nations includes the study of points, lines, planes, angles, triangles, congruence, similarity, solid figures, and circles. Euclidean geometry also has applications in computer science, crystallography, and various branches of modern mathematics. Differential geometry uses techniques of calculus and linear algebra to study problems in geometry; it has applications in physics, including in general relativity. Topology is the field concerned with the properties of geometric objects that are unchanged by continuous mappings.
In practice, this often means dealing with large-scale properties of spaces, such as connectedness and compactness. Convex geometry investigates convex shapes in the Euclidean space and its more abstract analogues, often using techniques of real analysis; it has close connections to convex analysis, optimization and functional analysis. Algebraic geometry studies geometry through the use of multivariate polynomials and other algebraic techniques; it has applications in many areas, including cryptography and string theory. Discrete geometry is concerned mainly with questions of the relative position of simple geometric objects, such as points, lines and circles. It shares many methods and principles with combinatorics. Geometry has applications to many fields, including art, architecture, and physics, as well as to other branches of mathematics. The earliest recorded beginnings of geometry can be traced to ancient Mesopotamia. The earliest known texts on geometry are the Egyptian Rhind Papyrus and Moscow Papyrus, and the Babylonian clay tablets such as Plimpton 322. For example, the Moscow Papyrus gives a formula for calculating the volume of a truncated pyramid. Later clay tablets demonstrate that Babylonian astronomers implemented trapezoid procedures for computing Jupiter's position and motion within time-velocity space.
Geometry
–
Visual checking of the Pythagorean theorem for the (3, 4, 5) triangle, as in the Chou Pei Suan Ching, 500–200 BC.
Geometry
–
An illustration of Desargues' theorem, an important result in Euclidean and projective geometry.
Geometry
–
Geometry lessons in the 20th century
Geometry
–
A European and an Arab practicing geometry in the 15th century.
5.
Coordinate system
–
The order of the coordinates is significant, and they are sometimes identified by their position in an ordered tuple and sometimes by a letter, as in the x-coordinate. The coordinates are taken to be real numbers in elementary mathematics. The use of a coordinate system allows problems in geometry to be translated into problems about numbers and vice versa; this is the basis of analytic geometry. The simplest example of a coordinate system is the identification of points on a line with real numbers using the number line. In this system, an arbitrary point O (the origin) is chosen on a given line, and the coordinate of a point P is defined as the signed distance from O to P. Each point is given a unique coordinate, and each number is the coordinate of a unique point. The prototypical example of a coordinate system is the Cartesian coordinate system. In the plane, two perpendicular lines are chosen and the coordinates of a point are taken to be the signed distances to the lines. In three dimensions, three mutually perpendicular planes are chosen and the three coordinates of a point are the signed distances to each of the planes. This can be generalized to create n coordinates for any point in n-dimensional Euclidean space. Depending on the direction and order of the coordinate axes, the system may be a right-handed or a left-handed system. This is one of many coordinate systems. Another common coordinate system for the plane is the polar coordinate system. A point is chosen as the pole and a ray from this point is taken as the polar axis. For a given angle θ, there is a single line through the pole whose angle with the polar axis is θ, and there is a unique point on this line whose signed distance from the origin is r for a given number r. For a given pair of coordinates (r, θ) there is a single point, but any point is represented by many pairs of coordinates: for example, (r, θ), (r, θ + 2π) and (−r, θ + π) are all polar coordinates for the same point. The pole is represented by (0, θ) for any value of θ. There are two common methods for extending the polar coordinate system to three dimensions.
In the cylindrical coordinate system, a z-coordinate with the same meaning as in Cartesian coordinates is added to the r and θ polar coordinates, giving a triple (r, θ, z). Spherical coordinates take this a step further by converting the pair of cylindrical coordinates (r, z) to polar coordinates (ρ, φ), giving a triple (ρ, θ, φ). A point in the plane may be represented in homogeneous coordinates by a triple (x, y, z), where x/z and y/z are the Cartesian coordinates of the point.
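The non-uniqueness of polar coordinates noted above can be checked with a brief sketch (the sample values r = 2, θ = π/3 are assumptions): shifting the angle by 2π, or negating the radius and shifting by π, yields the same Cartesian point.

```python
import math

# Sketch: converting polar coordinates to Cartesian coordinates, illustrating
# that one point in the plane has many polar coordinate pairs.

def polar_to_cartesian(r, theta):
    return (r * math.cos(theta), r * math.sin(theta))

p1 = polar_to_cartesian(2.0, math.pi / 3)
p2 = polar_to_cartesian(2.0, math.pi / 3 + 2 * math.pi)  # angle shifted by 2*pi
p3 = polar_to_cartesian(-2.0, math.pi / 3 + math.pi)     # negative radius

print(all(math.isclose(a, b) for a, b in zip(p1, p2)))  # True: same point
print(all(math.isclose(a, b) for a, b in zip(p1, p3)))  # True: same point
```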
Coordinate system
–
The spherical coordinate system is commonly used in physics. It assigns three numbers (known as coordinates) to every point in Euclidean space: radial distance r, polar angle θ (theta), and azimuthal angle φ (phi). The symbol ρ (rho) is often used instead of r.
6.
Euclidean space
–
In geometry, Euclidean space encompasses the two-dimensional Euclidean plane, the three-dimensional space of Euclidean geometry, and certain other spaces. It is named after the Ancient Greek mathematician Euclid of Alexandria; the term Euclidean distinguishes these spaces from other types of spaces considered in modern geometry. Euclidean spaces also generalize to higher dimensions. Classical Greek geometry defined the Euclidean plane and Euclidean three-dimensional space using certain postulates, while the other properties of these spaces were deduced as theorems. Geometric constructions were also used to define rational numbers as ratios of commensurable lengths. In the modern approach, points of the space are specified with collections of real numbers; this approach brings the tools of algebra and calculus to bear on questions of geometry and has the advantage that it generalizes easily to Euclidean spaces of more than three dimensions. From the modern viewpoint, there is essentially only one Euclidean space of each dimension. With Cartesian coordinates it is modelled by the real coordinate space of the same dimension. In one dimension, this is the real line; in two dimensions, it is the Cartesian plane; and in higher dimensions it is a coordinate space with three or more real number coordinates. One way to think of the Euclidean plane is as a set of points satisfying certain relationships, expressible in terms of distance and angle. For example, there are two fundamental operations on the plane. One is translation, which means a shifting of the plane so that every point is shifted in the same direction and by the same distance. The other is rotation about a fixed point in the plane. In order to make all of this mathematically precise, the theory must clearly define the notions of distance, angle, translation, and rotation. Even when used in physical theories, Euclidean space is an abstraction detached from actual physical locations, specific reference frames, and measurement instruments.
The standard way to define such a space, as carried out in the remainder of this article, is to define the Euclidean plane as a two-dimensional real vector space equipped with an inner product. The reason for working with abstract vector spaces instead of Rn is that it is often preferable to work in a coordinate-free manner. Once the Euclidean plane has been described in this language, it is actually a simple matter to extend its concept to arbitrary dimensions. For the most part, the vocabulary, formulae, and calculations are not made any more difficult by the presence of more dimensions. Intuitively, the distinction between a Euclidean space and the underlying vector space says merely that there is no canonical choice of where the origin should go in the space.
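As a hedged sketch of the vector-space-with-inner-product viewpoint (the function names are hypothetical), distance and angle in the plane can both be derived from the inner product alone:

```python
import math

# Sketch: the Euclidean plane modelled as R^2 with the standard inner product;
# distance and angle are both derived from the inner product.

def inner(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(u):
    return math.sqrt(inner(u, u))

def distance(p, q):
    return norm(tuple(pi - qi for pi, qi in zip(p, q)))

def angle(u, v):
    return math.acos(inner(u, v) / (norm(u) * norm(v)))

print(distance((0, 0), (3, 4)))              # 5.0
print(math.degrees(angle((1, 0), (0, 2))))   # 90.0
```

Nothing here depends on the dimension being 2; the same definitions work verbatim in any number of dimensions, which is the point made in the text.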
Euclidean space
–
A sphere, the most perfect spatial shape according to the Pythagoreans, is also an important concept in the modern understanding of Euclidean spaces.
10.
Spherical coordinates
–
It can be seen as the three-dimensional version of the polar coordinate system. The radial distance is also called the radius or radial coordinate. The polar angle may be called colatitude, zenith angle, normal angle, or inclination angle. The use of symbols and the order of the coordinates differs between sources. In both systems ρ is often used instead of r; other conventions are also used, so great care needs to be taken to check which one is being used. A number of different spherical coordinate systems following other conventions are used outside mathematics. In a geographical coordinate system, positions are measured in latitude, longitude and height or altitude. There are a number of different celestial coordinate systems based on different fundamental planes; the polar angle is often replaced by the elevation angle measured from the reference plane, so that an elevation angle of zero is at the horizon. The spherical coordinate system generalises the two-dimensional polar coordinate system. It can also be extended to higher-dimensional spaces and is then referred to as a hyperspherical coordinate system. To define a spherical coordinate system, one must choose two orthogonal directions, the zenith and the azimuth reference, and an origin point in space. These choices determine a reference plane that contains the origin and is perpendicular to the zenith. The spherical coordinates of a point P are then defined as follows: the inclination is the angle between the zenith direction and the line segment OP, and the azimuth is the signed angle measured from the azimuth reference direction to the orthogonal projection of the line segment OP on the reference plane. The sign of the azimuth is determined by choosing what is a positive sense of turning about the zenith. This choice is arbitrary, and is part of the coordinate system's definition. The elevation angle is 90 degrees minus the inclination angle. If the inclination is zero or 180 degrees, the azimuth is arbitrary; if the radius is zero, both azimuth and inclination are arbitrary.
In linear algebra, the vector from the origin O to the point P is often called the position vector of P. Several different conventions exist for representing the three coordinates, and for the order in which they should be written. The use of (r, θ, φ) to denote radial distance, inclination, and azimuth, respectively, is common practice in physics, and is specified by ISO standard 80000-2:2009, and earlier in ISO 31-11.
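A short sketch of the physics convention described above, converting (r, θ, φ) to Cartesian coordinates (the sample values are assumptions):

```python
import math

# Sketch: spherical -> Cartesian conversion in the physics (ISO 80000-2)
# convention, (r, theta, phi) = (radial distance, polar angle, azimuthal angle).

def spherical_to_cartesian(r, theta, phi):
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

# theta = 0 points along the zenith (the +z axis)...
print(spherical_to_cartesian(1.0, 0.0, 0.0))  # (0.0, 0.0, 1.0)
# ...and theta = 90 degrees lies in the reference plane.
x, y, z = spherical_to_cartesian(2.0, math.pi / 2, 0.0)
print(round(x, 9), round(y, 9), round(z, 9))
```

Note the caveat from the text: other sources swap θ and φ or use ρ for the radius, so the convention must be checked before reusing formulas like this one.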
Spherical coordinates
–
Spherical coordinates (r, θ, φ) as commonly used in physics: radial distance r, polar angle θ (theta), and azimuthal angle φ (phi). The symbol ρ (rho) is often used instead of r.
12.
Scalar (mathematics)
–
A scalar is an element of a field which is used to define a vector space. A quantity described by multiple scalars, such as having both direction and magnitude, is called a vector. More generally, a vector space may be defined by using any field instead of the real numbers, such as the complex numbers; then the scalars of that space will be the elements of the associated field. A scalar product operation – not to be confused with scalar multiplication – may be defined on a vector space, allowing two vectors to be multiplied to produce a scalar; a vector space equipped with a scalar product is called an inner product space. The real component of a quaternion is also called its scalar part. The term is also sometimes used informally to mean a vector, matrix, tensor, or other usually compound value that is actually reduced to a single component. Thus, for example, the product of a 1×n matrix and an n×1 matrix, which is formally a 1×1 matrix, is often said to be a scalar. The term scalar matrix is used to denote a matrix of the form kI, where k is a scalar and I is the identity matrix. The word scalar derives from the Latin word scalaris, an adjectival form of scala (Latin for "ladder"). The English word scale also comes from scala. According to a citation in the Oxford English Dictionary, the first recorded usage of the term scalar in English came with W. R. Hamilton. A vector space is defined as a set of vectors, a set of scalars, and a scalar multiplication operation that takes a scalar k and a vector v to another vector kv. For example, in a coordinate space, the scalar multiplication k(v1, v2, …, vn) yields (kv1, kv2, …, kvn). In a function space, kƒ is the function x ↦ kƒ(x). The scalars can be taken from any field, including the rational, algebraic, real, and complex numbers, as well as finite fields. According to a fundamental theorem of linear algebra, every vector space has a basis. It follows that every vector space over a scalar field K is isomorphic to a coordinate vector space where the coordinates are elements of K; for example, every real vector space of dimension n is isomorphic to the n-dimensional real space Rn. Alternatively, a vector space V can be equipped with a norm function that assigns to every vector v in V a scalar ||v||. By definition, multiplying v by a scalar k also multiplies its norm by |k|.
If ||v|| is interpreted as the length of v, this operation can be described as scaling the length of v by k. A vector space equipped with a norm is called a normed vector space. The norm is usually defined to be an element of V's scalar field K. Moreover, if V has dimension 2 or more, K must be closed under square root, as well as the four arithmetic operations; thus the rational numbers Q are excluded.
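The two properties above — component-wise scalar multiplication and the scaling of the norm by |k| — can be sketched in a few lines of Python (a minimal illustration; the helper names `scale` and `norm` are ours, not from the article):

```python
import math

def scale(k, v):
    """Scalar multiplication: multiply each component of v by the scalar k."""
    return [k * x for x in v]

def norm(v):
    """Euclidean norm ||v||: square root of the sum of squared components."""
    return math.sqrt(sum(x * x for x in v))

v = [3.0, 4.0]      # a vector in R^2 with ||v|| = 5
k = -2.0
w = scale(k, v)     # kv = [-6.0, -8.0]

# Multiplying v by k multiplies its norm by |k|: ||kv|| = |k| * ||v||
assert math.isclose(norm(w), abs(k) * norm(v))
```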
Scalar (mathematics)
–
Scalars are real numbers used in linear algebra, as opposed to vectors. This image shows a Euclidean vector. Its coordinates x and y are scalars, as is its length, but v is not a scalar.
13.
Vector (geometric)
–
In mathematics, physics, and engineering, a Euclidean vector is a geometric object that has magnitude and direction. Vectors can be added to other vectors according to vector algebra. A Euclidean vector is frequently represented by a line segment with a definite direction, or graphically as an arrow, connecting an initial point A with a terminal point B, and denoted by A B →. A vector is what is needed to carry the point A to the point B; the term was first used by 18th century astronomers investigating planetary revolution around the Sun. The magnitude of the vector is the distance between the two points, and the direction refers to the direction of displacement from A to B. These operations and associated laws qualify Euclidean vectors as an example of the more generalized concept of vectors defined simply as elements of a vector space. Vectors play an important role in physics: the velocity and acceleration of a moving object and the forces acting on it can all be described with vectors, and many other physical quantities can be usefully thought of as vectors. Although most of them do not represent distances, their magnitude and direction can still be represented by the length and direction of an arrow. The mathematical representation of a physical vector depends on the coordinate system used to describe it. Other vector-like objects that describe physical quantities and transform in a similar way under changes of the coordinate system include pseudovectors and tensors. The concept of vector, as we know it today, evolved gradually over a period of more than 200 years; about a dozen people made significant contributions. Giusto Bellavitis abstracted the basic idea in 1835 when he established the concept of equipollence: working in a Euclidean plane, he made equipollent any pair of line segments of the same length and orientation. Essentially he realized an equivalence relation on the pairs of points in the plane. The term vector was introduced by William Rowan Hamilton as part of a quaternion, which is a sum q = s + v of a real number s and a 3-dimensional vector v.
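The statement that the magnitude of AB→ is the distance between A and B, with direction given by the displacement from A to B, can be sketched numerically (an illustrative fragment; the function names are ours):

```python
import math

def vector_between(A, B):
    """The vector AB carrying point A to point B: component-wise B - A."""
    return [b - a for a, b in zip(A, B)]

def magnitude(v):
    """Length of the vector, i.e. the straight-line distance between its endpoints."""
    return math.sqrt(sum(x * x for x in v))

A, B = (1.0, 2.0), (4.0, 6.0)
AB = vector_between(A, B)     # displacement components [3.0, 4.0]
assert magnitude(AB) == 5.0   # the distance from A to B
```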
Like Bellavitis, Hamilton viewed vectors as representative of classes of equipollent directed segments. Hermann Grassmann's work was largely neglected until the 1870s. Peter Guthrie Tait carried the quaternion standard after Hamilton; his 1867 Elementary Treatise of Quaternions included extensive treatment of the nabla or del operator ∇. In 1878 Elements of Dynamic was published by William Kingdon Clifford. Clifford simplified the quaternion study by isolating the dot product and cross product of two vectors from the complete quaternion product; this approach made vector calculations available to engineers and others working in three dimensions and skeptical of the fourth. Josiah Willard Gibbs, who was exposed to quaternions through James Clerk Maxwell's Treatise on Electricity and Magnetism, separated off their vector part for independent treatment. The first half of Gibbs's Elements of Vector Analysis, published in 1881, presents what is essentially the modern system of vector analysis. In 1901 Edwin Bidwell Wilson published Vector Analysis, adapted from Gibbs's lectures. In physics and engineering, a vector is typically regarded as a geometric entity characterized by a magnitude and a direction. It is formally defined as a directed line segment, or arrow.
Vector (geometric)
–
This article is about the vectors mainly used in physics and engineering to represent directed quantities. For mathematical vectors in general, see Vector (mathematics and physics).
14.
Tensor
–
In mathematics, tensors are geometric objects that describe linear relations between geometric vectors, scalars, and other tensors. Elementary examples of such relations include the dot product, the cross product, and linear maps. Geometric vectors, often used in physics and engineering applications, and scalars themselves are also tensors. Given a coordinate basis or fixed frame of reference, a tensor can be represented as an organized multidimensional array of numerical values. The order of a tensor is the dimensionality of the array needed to represent it, or equivalently, the number of indices needed to label a component of that array. For example, a linear map is represented by a matrix in a basis, and therefore is a 2nd-order tensor. A vector is represented as a 1-dimensional array in a basis and is a 1st-order tensor; scalars are single numbers and are thus 0th-order tensors. Because they express a relationship between vectors, tensors themselves must be independent of a particular choice of coordinate system. The precise form of the transformation law determines the type of the tensor. The tensor type is a pair of natural numbers (n, m), where n is the number of contravariant indices and m is the number of covariant indices. The total order of a tensor is the sum of these two numbers. The concept enabled an alternative formulation of the differential geometry of a manifold in the form of the Riemann curvature tensor. There are several approaches to defining tensors; although seemingly different, the approaches just describe the same geometric concept using different languages and at different levels of abstraction. For example, a linear operator is represented in a basis as a two-dimensional square n × n array. The numbers in the array are known as the scalar components of the tensor or simply its components. They are denoted by indices giving their position in the array, as subscripts and superscripts. For example, the components of an order 2 tensor T could be denoted Tij; whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below.
The total number of indices required to identify each component uniquely is equal to the dimension of the array, and is called the order or rank of the tensor. However, the term rank generally has another meaning in the context of matrices. Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis.
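The correspondence between order and the number of indices can be made concrete with a small sketch (plain Python, our own illustrative names; a matrix here stands for a 2nd-order tensor representing a linear map in a fixed basis):

```python
# Order of a tensor = number of indices needed to address one component.
scalar = 7.0                      # order 0: no indices
vector = [1.0, 2.0, 3.0]          # order 1: one index, v[i]
matrix = [[2.0, 0.0, 0.0],        # order 2: two indices, T[i][j];
          [0.0, 3.0, 0.0],        # in a fixed basis this array
          [0.0, 0.0, 1.0]]        # represents a linear map

def apply(T, v):
    """Apply the linear map T (order-2 tensor) to a vector: (Tv)_i = sum_j T_ij v_j."""
    return [sum(T[i][j] * v[j] for j in range(len(v))) for i in range(len(T))]

assert apply(matrix, vector) == [2.0, 6.0, 3.0]
```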
Tensor
–
Cauchy stress tensor, a second-order tensor. The tensor's components, in a three-dimensional Cartesian coordinate system, form the matrix whose columns are the stresses (forces per unit area) acting on the e 1, e 2, and e 3 faces of the cube.
16.
Gradient
–
In mathematics, the gradient is a multi-variable generalization of the derivative. While a derivative can be defined on functions of a single variable, for functions of several variables the gradient takes its place. The gradient is a vector-valued function, as opposed to a derivative, which is scalar-valued. If f is a differentiable, real-valued function of several variables, its gradient is the vector whose components are the partial derivatives of f. Like the derivative, the gradient represents the slope of the tangent of the graph of the function. More precisely, the gradient points in the direction of the greatest rate of increase of the function, and its magnitude is the slope of the graph in that direction. The components of the gradient in coordinates are the coefficients of the variables in the equation of the tangent space to the graph. The Jacobian is the generalization of the gradient for vector-valued functions of several variables; a further generalization for a function between Banach spaces is the Fréchet derivative. Consider a room in which the temperature is given by a scalar field T. At each point in the room, the gradient of T at that point will show the direction in which the temperature rises most quickly; the magnitude of the gradient will determine how fast the temperature rises in that direction. Consider a surface whose height above sea level at the point (x, y) is H(x, y). The gradient of H at a point is a vector pointing in the direction of the steepest slope or grade at that point. The steepness of the slope at that point is given by the magnitude of the gradient vector. The gradient can also be used to measure how a scalar field changes in other directions, rather than just the direction of greatest change. Suppose that the steepest slope on a hill is 40%. If a road goes directly up the hill, then the steepest slope on the road will also be 40%; if, instead, the road goes around the hill at an angle, then it will have a shallower slope. This observation can be stated as follows: if the hill height function H is differentiable, then the gradient of H dotted with a unit vector gives the slope of the hill in the direction of the vector.
More precisely, when H is differentiable, the dot product of the gradient of H with a given unit vector is equal to the directional derivative of H in the direction of that unit vector. The gradient of a function f is denoted ∇f or ∇→f, where ∇ denotes the vector differential operator. The notation grad f is also commonly used for the gradient. The gradient of f is defined as the unique vector field whose dot product with any vector v at each point x is the directional derivative of f along v. That is, (∇f(x)) ⋅ v = Dvf(x).
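This defining identity can be checked numerically. The sketch below (our own finite-difference helpers, using the example function from the figure caption) estimates the gradient by central differences and verifies that the directional derivative equals the dot product of the gradient with a unit vector:

```python
import math

def f(x, y):
    # The example function from the figure: f(x, y) = x * e^(-(x^2 + y^2))
    return x * math.exp(-(x * x + y * y))

def grad_f(x, y, h=1e-6):
    """Central-difference estimate of the gradient (df/dx, df/dy)."""
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return (dfdx, dfdy)

def directional_derivative(x, y, u, h=1e-6):
    """Rate of change of f at (x, y) along the unit vector u."""
    return (f(x + h * u[0], y + h * u[1]) - f(x - h * u[0], y - h * u[1])) / (2 * h)

x, y = 0.5, 0.25
u = (1 / math.sqrt(2), 1 / math.sqrt(2))   # unit vector at 45 degrees
g = grad_f(x, y)

# The directional derivative equals the dot product of the gradient with u.
assert math.isclose(directional_derivative(x, y, u),
                    g[0] * u[0] + g[1] * u[1], rel_tol=1e-4)
```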
Gradient
–
Gradient of the 2-d function f(x, y) = xe^(−(x² + y²)) is plotted as blue arrows over the pseudocolor plot of the function.
Gradient
–
In the above two images, the values of the function are represented in black and white, black representing higher values, and its corresponding gradient is represented by blue arrows.
17.
Boundary conditions
–
In mathematics, in the field of differential equations, a boundary value problem is a differential equation together with a set of additional constraints, called the boundary conditions. A solution to a boundary value problem is a solution to the differential equation which also satisfies the boundary conditions. Boundary value problems arise in several branches of physics, as any physical differential equation will have them. Problems involving the wave equation, such as the determination of normal modes, are often stated as boundary value problems. A large class of important boundary value problems are the Sturm–Liouville problems; the analysis of these problems involves the eigenfunctions of a differential operator. To be useful in applications, a boundary value problem should be well posed. This means that given the input to the problem there exists a unique solution, which depends continuously on the input. Much theoretical work in the field of differential equations is devoted to proving that boundary value problems arising from scientific and engineering applications are in fact well posed. Among the earliest boundary value problems to be studied is the Dirichlet problem, of finding the harmonic functions. Boundary value problems are similar to initial value problems: a boundary value problem has conditions specified at the extremes of the independent variable, whereas an initial value problem has all of the conditions specified at the same value of the independent variable. Finding the temperature at all points of an iron bar with one end kept at absolute zero and the other end at the freezing point of water would be a boundary value problem. If the problem is dependent on both space and time, one could specify the value of the problem at a given point for all time, or at a given time for all space. Concretely, an example of a boundary value problem is the differential equation y″ + y = 0, to be solved for the unknown function y(x) with the boundary conditions y(0) = 0, y(π/2) = 2. Without the boundary conditions, the general solution to this equation is y(x) = A sin(x) + B cos(x). From the boundary condition y(0) = 0 one obtains 0 = A⋅0 + B⋅1, which implies that B = 0. From the boundary condition y(π/2) = 2 one finds 2 = A⋅1 and so A = 2.
One sees that imposing boundary conditions allowed one to determine a unique solution. A boundary condition which specifies the value of the function itself is a Dirichlet boundary condition, or first-type boundary condition. For example, if one end of an iron rod is held at absolute zero, then the value of the problem would be known at that point in space. A boundary condition which specifies the value of the derivative of the function is a Neumann boundary condition, or second-type boundary condition. For example, if there is a heater at one end of an iron rod, then energy would be added at a constant rate but the actual temperature would not be known. If the boundary gives a value to the normal derivative and the variable itself, then it is a Cauchy boundary condition. Summary of boundary conditions for the unknown function y, with constants c0 and c1 specified by the boundary conditions.
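The worked example above can be verified numerically. The sketch below (our own helper names; the second derivative is approximated by central differences) checks that y(x) = 2 sin(x) satisfies both the equation y″ + y = 0 and the boundary conditions:

```python
import math

def y(x):
    # Solution of y'' + y = 0 with y(0) = 0 and y(pi/2) = 2:
    # the general solution is A*sin(x) + B*cos(x);
    # y(0) = 0 forces B = 0, and y(pi/2) = 2 forces A = 2.
    return 2 * math.sin(x)

def second_derivative(g, x, h=1e-4):
    """Central-difference estimate of g''(x)."""
    return (g(x + h) - 2 * g(x) + g(x - h)) / (h * h)

# The boundary conditions hold...
assert math.isclose(y(0), 0, abs_tol=1e-12)
assert math.isclose(y(math.pi / 2), 2)

# ...and the ODE y'' + y = 0 holds at sample interior points.
for x in (0.3, 0.8, 1.2):
    assert abs(second_derivative(y, x) + y(x)) < 1e-6
```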
Boundary conditions
–
Shows a region where a differential equation is valid and the associated boundary values
18.
Cartography
–
Cartography is the study and practice of making maps. Combining science, aesthetics, and technique, cartography builds on the premise that reality can be modeled in ways that communicate spatial information effectively. The fundamental problems of traditional cartography are to: set the map's agenda and select traits of the object to be mapped (the concern of map editing; traits may be physical, such as roads or land masses, or abstract, such as toponyms or political boundaries); represent the terrain of the mapped object on flat media (the concern of map projections); eliminate characteristics of the mapped object that are not relevant to the map's purpose (the concern of generalization); reduce the complexity of the characteristics that will be mapped (also the concern of generalization); and orchestrate the elements of the map to best convey its message to its audience (the concern of map design). Modern cartography constitutes many theoretical and practical foundations of geographic information systems. The earliest known map is a matter of debate, both because the term map isn't well-defined and because some artifacts that might be maps might actually be something else. A wall painting that might depict the ancient Anatolian city of Çatalhöyük has been dated to the late 7th millennium BCE. The oldest surviving world maps are from 9th century BCE Babylonia. One shows Babylon on the Euphrates, surrounded by Assyria, Urartu and several cities, all, in turn, surrounded by a river; another depicts Babylon as being north of the world center. The ancient Greeks and Romans created maps from the time of Anaximander in the 6th century BCE. In the 2nd century AD, Ptolemy wrote his treatise on cartography, Geographia. This contained Ptolemy's world map – the world then known to Western society. As early as the 8th century, Arab scholars were translating the works of the Greek geographers into Arabic. In ancient China, geographical literature dates to the 5th century BCE.
The oldest extant Chinese maps come from the State of Qin, dated back to the 4th century BCE. The book Xin Yi Xiang Fa Yao, published in 1092 by the Chinese scientist Su Song, contains a star map on the equidistant cylindrical projection. Early forms of cartography of India included depictions of the pole star and surrounding constellations; these charts may have been used for navigation. Mappa mundi are the medieval European maps of the world; approximately 1,100 mappae mundi are known to have survived from the Middle Ages. Of these, some 900 are found illustrating manuscripts and the remainder exist as stand-alone documents. The Arab geographer Muhammad al-Idrisi produced his medieval atlas Tabula Rogeriana in 1154.
Cartography
–
A medieval depiction of the Ecumene (1482, Johannes Schnitzer, engraver), constructed after the coordinates in Ptolemy's Geography and using his second map projection. The translation into Latin and dissemination of Geography in Europe, in the beginning of the 15th century, marked the rebirth of scientific cartography, after more than a millennium of stagnation.
Cartography
–
Valcamonica rock art (I), Paspardo r. 29, topographic composition, 4th millennium BC
Cartography
–
The Bedolina Map and its tracing, 6th–4th century BC
Cartography
–
Copy (1472) of St. Isidore's TO map of the world.
19.
Physics
–
Physics is the natural science that involves the study of matter and its motion and behavior through space and time, along with related concepts such as energy and force. One of the most fundamental scientific disciplines, the main goal of physics is to understand how the universe behaves. Physics is one of the oldest academic disciplines, perhaps the oldest through its inclusion of astronomy. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the mechanisms of other sciences while opening new avenues of research in areas such as mathematics. Physics also makes significant contributions through advances in new technologies that arise from theoretical breakthroughs. The United Nations named 2005 the World Year of Physics. Astronomy is the oldest of the natural sciences; the stars and planets were often a target of worship, believed to represent the gods of early peoples, and the explanations for these phenomena were often unscientific and lacking in evidence. According to Asger Aaboe, the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy. The most notable innovations were in the field of optics and vision, which came from the works of many scientists like Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi and Avicenna. The most notable work was The Book of Optics, written by Ibn al-Haytham, in which he was not only the first to disprove the ancient Greek idea about vision, but also came up with a new theory. In the book, he was also the first to study the phenomenon of the pinhole camera. Many later European scholars and fellow polymaths, from Robert Grosseteste and Leonardo da Vinci to René Descartes, Johannes Kepler and Isaac Newton, were in his debt.
Indeed, the influence of Ibn al-Haytham's Optics ranks alongside that of Newton's work of the same title. The translation of The Book of Optics had a huge impact on Europe. From it, later European scholars were able to build the same devices as Ibn al-Haytham did, and from this, such important things as eyeglasses, magnifying glasses, telescopes, and cameras were developed. Physics became a separate science when early modern Europeans used experimental and quantitative methods to discover what are now considered to be the laws of physics. Newton also developed calculus, the mathematical study of change, which provided new mathematical methods for solving physical problems. The discovery of new laws in thermodynamics, chemistry, and electromagnetics resulted from greater research efforts during the Industrial Revolution as energy needs increased. However, inaccuracies in classical mechanics for very small objects and very high velocities led to the development of modern physics in the 20th century. Modern physics began in the early 20th century with the work of Max Planck in quantum theory and Albert Einstein's theory of relativity. Both of these theories came about due to inaccuracies in classical mechanics in certain situations. Quantum mechanics would come to be pioneered by Werner Heisenberg, Erwin Schrödinger and Paul Dirac. From this early work, and work in related fields, the Standard Model of particle physics was derived. Areas of mathematics in general are important to this field, such as the study of probabilities. In many ways, physics stems from ancient Greek philosophy.
Physics
–
Further information: Outline of physics
Physics
–
Ancient Egyptian astronomy is evident in monuments like the ceiling of Senemut's tomb from the Eighteenth Dynasty of Egypt.
Physics
–
Sir Isaac Newton (1643–1727), whose laws of motion and universal gravitation were major milestones in classical physics
Physics
–
Albert Einstein (1879–1955), whose work on the photoelectric effect and the theory of relativity led to a revolution in 20th century physics
20.
Quantum mechanics
–
Quantum mechanics, including quantum field theory, is a branch of physics which is the fundamental theory of nature at small scales and low energies of atoms and subatomic particles. Classical physics, the physics existing before quantum mechanics, derives from quantum mechanics as an approximation valid only at large scales. Early quantum theory was profoundly reconceived in the mid-1920s. The reconceived theory is formulated in various specially developed mathematical formalisms. In one of them, a mathematical function, the wave function, provides information about the probability amplitude of position, momentum, and other physical properties of a particle. In 1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he later described in a paper titled On the nature of light. This experiment played a major role in the general acceptance of the wave theory of light. In 1838, Michael Faraday discovered cathode rays. Planck's hypothesis that energy is radiated and absorbed in discrete quanta precisely matched the observed patterns of black-body radiation. In 1896, Wilhelm Wien empirically determined a distribution law of black-body radiation; Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, Wien's law was valid only at high frequencies and underestimated the radiance at low frequencies. Later, Planck corrected this model using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law. Following Max Planck's solution in 1900 to the black-body radiation problem, Albert Einstein offered a quantum-based theory to explain the photoelectric effect. Among the first to study quantum phenomena in nature were Arthur Compton and C. V. Raman. Robert Andrews Millikan studied the photoelectric effect experimentally, and Albert Einstein developed a theory for it. In 1913, Peter Debye extended Niels Bohr's theory of atomic structure, introducing elliptical orbits.
This phase is known as old quantum theory. According to Planck, each energy element is proportional to its frequency: E = hν, where h is Planck's constant. Planck cautiously insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the reality of the radiation itself. In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect; he won the 1921 Nobel Prize in Physics for this work. Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle, with a discrete quantum of energy that was dependent on its frequency. The Copenhagen interpretation of Niels Bohr became widely accepted. In the mid-1920s, developments in quantum mechanics led to its becoming the standard formulation for atomic physics. In the summer of 1925, Bohr and Heisenberg published results that closed the old quantum theory. Out of deference to their particle-like behavior in certain processes and measurements, light quanta came to be called photons. From Einstein's simple postulation was born a flurry of debating and theorizing; thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927.
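Planck's relation E = hν can be illustrated with a short numeric sketch (the constant is the approximate textbook value; the function name is ours):

```python
# Planck's relation E = h * nu: each energy quantum is proportional to its frequency.
h = 6.626e-34          # Planck's constant in joule-seconds (approximate value)

def photon_energy(frequency_hz):
    """Energy in joules of one quantum of radiation at the given frequency."""
    return h * frequency_hz

# Green light (~5.4e14 Hz) carries quanta of roughly 3.6e-19 J each.
E = photon_energy(5.4e14)
assert 3.5e-19 < E < 3.7e-19
```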
Quantum mechanics
–
Max Planck is considered the father of the quantum theory.
Quantum mechanics
–
Solution to Schrödinger's equation for the hydrogen atom at different energy levels. The brighter areas represent a higher probability of finding an electron
Quantum mechanics
–
The 1927 Solvay Conference in Brussels.
21.
Engineering
–
The term engineering is derived from the Latin ingenium, meaning cleverness, and ingeniare, meaning to contrive, devise. Engineering has existed since ancient times as humans devised fundamental inventions such as the wedge, lever, and wheel; each of these inventions is essentially consistent with the modern definition of engineering. The term engineering is derived from the word engineer, which itself dates back to 1390, when an engineer originally referred to a constructor of military engines. In this context, now obsolete, an engine referred to a military machine. Notable examples of the obsolete usage which have survived to the present day are military engineering corps. The word engine itself is of even older origin, ultimately deriving from the Latin ingenium, meaning innate quality, especially mental power, hence a clever invention. The earliest civil engineer known by name is Imhotep; as one of the officials of the Pharaoh, Djosèr, he probably designed and supervised the construction of the Pyramid of Djoser at Saqqara in Egypt around 2630–2611 BC. Ancient Greece developed machines in both civilian and military domains. The Antikythera mechanism, the first known mechanical computer, and the mechanical inventions of Archimedes are examples of early mechanical engineering. In the Middle Ages, the trebuchet was developed. The first steam engine was built in 1698 by Thomas Savery; the development of this device gave rise to the Industrial Revolution in the coming decades. With the rise of engineering as a profession in the 18th century, the term came to be applied more narrowly; similarly, in addition to military and civil engineering, the fields then known as the mechanic arts became incorporated into engineering. The inventions of Thomas Newcomen and the Scottish engineer James Watt gave rise to mechanical engineering. The development of specialized machines and machine tools during the Industrial Revolution led to the rapid growth of mechanical engineering both in its birthplace Britain and abroad.
John Smeaton was the first self-proclaimed civil engineer and is regarded as the father of civil engineering. He was an English civil engineer responsible for the design of bridges, canals, and harbours, and he was also a capable mechanical engineer and an eminent physicist. Smeaton designed the third Eddystone Lighthouse, where he pioneered the use of hydraulic lime. His lighthouse remained in use until 1877 and was dismantled and partially rebuilt at Plymouth Hoe, where it is known as Smeaton's Tower. The United States census of 1850 listed the occupation of engineer for the first time with a count of 2,000. There were fewer than 50 engineering graduates in the U.S. before 1865. In 1870 there were a dozen U.S. mechanical engineering graduates; in 1890 there were 6,000 engineers in civil, mining, mechanical and electrical specialties. There was no chair of applied mechanism and applied mechanics established at Cambridge until 1875. The theoretical work of James Maxwell and Heinrich Hertz in the late 19th century gave rise to the field of electronics.
Engineering
–
The steam engine, a major driver in the Industrial Revolution, underscores the importance of engineering in modern history. This beam engine is on display in the Technical University of Madrid.
Engineering
–
Relief map of the Citadel of Lille, designed in 1668 by Vauban, the foremost military engineer of his age.
Engineering
–
The Ancient Romans built aqueducts to bring a steady supply of clean fresh water to cities and towns in the empire.
Engineering
–
The International Space Station represents a modern engineering challenge from many disciplines.
22.
Position vector
–
In geometry, a position vector is a Euclidean vector that represents the position of a point P in space in relation to an arbitrary reference origin O. Usually denoted x, r, or s, it corresponds to the straight-line distances along each axis from O to P: r = OP→. The term position vector is used mostly in the fields of geometry and mechanics. Frequently it is used in two-dimensional or three-dimensional space, but it can be generalized to Euclidean spaces in any number of dimensions. Different coordinate systems and their corresponding basis vectors can represent the same position vector. More general curvilinear coordinates could be used instead, and are used in contexts like continuum mechanics. Linear algebra allows for the abstraction of an n-dimensional position vector. The notion of position space is intuitive, since each xi can be any value; the dimension of the position space is n. The coordinates of the vector r with respect to the basis vectors ei are xi. The vector of coordinates forms the coordinate vector or n-tuple. Each coordinate xi may be parameterized by a number of parameters t. One parameter xi would describe a curved 1D path, two parameters xi describe a curved 2D surface, three xi describe a curved 3D volume of space, and so on. The linear span of a basis set B equals the position space Rn. Position vector fields are used to describe continuous and differentiable space curves, in which case the independent parameter need not be time, but can be arc length of the curve. In the case of one dimension, the position has only one component; it could be, say, a vector in the x-direction, or the radial r-direction. Equivalent notations include x ≡ x(t), r ≡ r(t), s ≡ s(t). For a position vector r that is a function of time t, derivatives can be taken with respect to t; these derivatives have common utility in the study of kinematics, control theory, engineering and other sciences. Velocity v = dr/dt, where dr is an infinitesimally small displacement. By extension, the higher order derivatives can be computed in a similar fashion. Study of these higher order derivatives can improve approximations of the original displacement function.
Such higher-order terms are required in order to represent the displacement function as a sum of an infinite sequence, enabling several analytical techniques in engineering. A displacement vector can be defined as the action of uniformly translating spatial points in a given direction over a given distance; the addition of displacement vectors then expresses the composition of these displacement actions, and scalar multiplication expresses scaling of the distance. With this in mind, we may define a position vector of a point in space as the displacement vector mapping a given origin to that point. Note that position vectors thus depend on a choice of origin for the space.
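The velocity relation v = dr/dt can be sketched numerically (a hedged illustration: the circular path r(t) and the helper names are our own, chosen so the exact derivative is known for comparison):

```python
import math

def r(t):
    """A hypothetical position vector r(t) tracing a circle of radius 2."""
    return (2 * math.cos(t), 2 * math.sin(t))

def velocity(t, h=1e-6):
    """Central-difference estimate of v = dr/dt."""
    r_plus, r_minus = r(t + h), r(t - h)
    return tuple((p - m) / (2 * h) for p, m in zip(r_plus, r_minus))

t = 0.7
v = velocity(t)
# For this circular motion the exact derivative is (-2 sin t, 2 cos t).
assert math.isclose(v[0], -2 * math.sin(t), rel_tol=1e-6)
assert math.isclose(v[1],  2 * math.cos(t), rel_tol=1e-6)
```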
Position vector
–
Space curve in 3D. The position vector r is parameterized by a scalar t. At r = a the red line is the tangent to the curve, and the blue plane is normal to the curve.
23.
Standard basis
–
In mathematics, the standard basis for a Euclidean space is the set of unit vectors pointing in the direction of the axes of a Cartesian coordinate system. For example, the standard basis for the Euclidean plane is formed by the vectors ex = (1, 0) and ey = (0, 1). Here the vector ex points in the x direction and the vector ey points in the y direction. There are several common notations for these vectors, such as e1, e2 or i, j. These vectors are written with a hat to emphasize their status as unit vectors. Each of these vectors is sometimes referred to as the versor of the corresponding Cartesian axis, and these vectors are a basis in the sense that any other vector can be expressed uniquely as a linear combination of them. For example, every vector v in three-dimensional space can be written uniquely as vx ex + vy ey + vz ez, the scalars vx, vy, vz being the scalar components of the vector v. In n-dimensional Euclidean space, the standard basis consists of n distinct vectors. Standard bases can be defined for other vector spaces, such as spaces of polynomials or matrices. In both cases, the standard basis consists of the elements of the vector space such that all coefficients but one are 0 and the non-zero coefficient is 1. For polynomials, the standard basis consists of the monomials and is commonly called the monomial basis. For matrices Mm×n, the standard basis consists of the m×n matrices with exactly one non-zero entry, which is 1. For example, the standard basis for 2×2 matrices is formed by the 4 matrices e11, e12, e21, e22, where eij has a 1 in the (i, j) position and zeros elsewhere. By definition, the standard basis is a sequence of orthogonal unit vectors; in other words, it is an ordered and orthonormal basis. However, an ordered orthonormal basis is not necessarily a standard basis: for instance, the two vectors representing a 30° rotation of the 2D standard basis described above are also orthogonal unit vectors, but they do not form a standard basis. There is a standard basis also for the ring of polynomials in n indeterminates over a field, namely the monomials.
This family is the basis of the R-module R^(I) of all families f = (fi) from I into a ring R that are zero except for a finite number of indices, if we interpret 1 as 1R. The existence of standard bases has become a topic of interest in algebraic geometry; it is now a part of a theory called standard monomial theory.
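The defining property of a standard basis, that any vector decomposes uniquely with its own components as coefficients, can be checked directly. The sketch below is a minimal illustration; the dimension and the sample vector are arbitrary choices.

```python
def standard_basis(n):
    # e_i has a 1 in position i and 0 elsewhere.
    return [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]

basis = standard_basis(3)
v = (2.0, -1.0, 5.0)

# The coefficients in v = vx*ex + vy*ey + vz*ez are just the components of v:
# projecting onto each basis vector recovers them.
coeffs = [sum(vi * ei for vi, ei in zip(v, e)) for e in basis]

# Recombining the basis vectors with those coefficients reproduces v.
recombined = tuple(sum(c * e[j] for c, e in zip(coeffs, basis)) for j in range(3))
```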
Standard basis
–
Every vector a in three dimensions is a
linear combination of the standard basis vectors i, j, and k.
24.
Basis (linear algebra)
–
In more general terms, a basis is a linearly independent spanning set. Given a basis of a vector space V, every element of V can be expressed uniquely as a linear combination of basis vectors. A vector space can have several distinct sets of basis vectors; however, each such set has the same number of elements. A basis B of a vector space V over a field F is a linearly independent subset of V that spans V. In more detail, suppose that B = {v1, …, vn} is a subset of a vector space V over a field F; B is a basis if every vector x in V can be written uniquely as x = a1v1 + ⋯ + anvn. The numbers ai are called the coordinates of the vector x with respect to the basis B, and a vector space that has a finite basis is called finite-dimensional. To deal with infinite-dimensional spaces, we must generalize the definition to include infinite basis sets. The sums in the definition are all finite, because without additional structure the axioms of a vector space do not permit us to speak meaningfully about an infinite sum of vectors. Settings that permit infinite linear combinations allow alternative definitions of the basis concept. It is often convenient to list the basis vectors in a specific order, for example, when considering the transformation matrix of a linear map with respect to a basis. We then speak of an ordered basis, which we define to be a sequence of linearly independent vectors that span V. B is a set of linearly independent vectors, i.e. it is a linearly independent set, and every vector in V can be expressed as a linear combination of vectors in B in a unique way. If the basis is ordered, then the coefficients in this linear combination provide coordinates of the vector relative to the basis. Every vector space has a basis; the proof of this requires the axiom of choice. All bases of a vector space have the same cardinality, called the dimension of the vector space. This result is known as the dimension theorem, and requires the ultrafilter lemma, a strictly weaker form of the axiom of choice.
Also, many vector spaces can be given a standard basis, which comprises both spanning and linearly independent vectors. Standard bases, for example: in Rn, {e1, …, en}, where ei is the ith column of the identity matrix; in P2, the set of all polynomials of degree at most 2, the monomials {1, x, x2} form the standard basis; in M22, the set of all 2×2 matrices, the standard basis consists of the matrices Em,n, where Em,n is the 2×2 matrix with a 1 in the (m, n) position and zeros elsewhere. Given a vector space V over a field F, suppose that B1 and B2 are two bases for V; by the dimension theorem above, they have the same number of elements.
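For a non-standard basis, the coordinates ai of a vector are found by solving a linear system rather than read off directly. The sketch below does this for an assumed basis {(1, 1), (1, −1)} of R² using Cramer's rule.

```python
def coordinates(b1, b2, x):
    # Coordinates (a1, a2) of x with respect to the basis {b1, b2} of R^2,
    # i.e. the unique solution of a1*b1 + a2*b2 = x, via Cramer's rule.
    det = b1[0] * b2[1] - b2[0] * b1[1]
    if det == 0:
        raise ValueError("b1 and b2 are linearly dependent: not a basis")
    a1 = (x[0] * b2[1] - b2[0] * x[1]) / det
    a2 = (b1[0] * x[1] - x[0] * b1[1]) / det
    return a1, a2

# Express (3, 1) in the (assumed) basis {(1, 1), (1, -1)}.
a1, a2 = coordinates((1, 1), (1, -1), (3, 1))
# Check by hand: 2*(1, 1) + 1*(1, -1) = (3, 1).
```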
Basis (linear algebra)
–
This picture illustrates the
standard basis in R 2. The blue and orange vectors are the elements of the basis; the green vector can be given in terms of the basis vectors, and so is
linearly dependent upon them.
25.
Tangent
–
In geometry, the tangent line to a plane curve at a given point is the straight line that just touches the curve at that point. Leibniz defined it as the line through a pair of infinitely close points on the curve; a similar definition applies to space curves and curves in n-dimensional Euclidean space. Similarly, the tangent plane to a surface at a point is the plane that just touches the surface at that point. The concept of a tangent is one of the most fundamental notions in geometry and has been extensively generalized. The word tangent comes from the Latin tangere, to touch. Euclid makes several references to the tangent to a circle in book III of the Elements. In his work Conics, Apollonius defines a tangent as a line such that no other straight line could fall between it and the curve. Archimedes found the tangent to an Archimedean spiral by considering the path of a point moving along the curve. Independently, Descartes used his method of normals, based on the observation that the radius of a circle is always normal to the circle itself. These methods led to the development of differential calculus in the 17th century. Many people contributed: Roberval discovered a method of drawing tangents; René-François de Sluse and Johannes Hudde found algebraic algorithms for finding tangents; further developments included those of John Wallis and Isaac Barrow, leading to the theory of Isaac Newton and Gottfried Leibniz. An 1828 definition of a tangent was a line which touches a curve. This old definition prevents inflection points from having any tangent; it has been dismissed, and the modern definitions are equivalent to those of Leibniz, who defined the tangent line as the line through a pair of infinitely close points on the curve. The tangent at a point A is the limit of the secant lines through A and a nearby point B as B approaches A along the curve. The existence and uniqueness of the tangent line depend on a certain type of mathematical smoothness, known as differentiability.
At most points, the tangent touches the curve without crossing it; a point where the tangent crosses the curve is called an inflection point. Circles, parabolas, hyperbolas and ellipses do not have any inflection point, but more complicated curves do, like the graph of a cubic function. Conversely, it may happen that the curve lies entirely on one side of a straight line passing through a point on it, and yet this line is not a tangent. This is the case, for example, for a line passing through the vertex of a triangle. In convex geometry, such lines are called supporting lines. The geometrical idea of the tangent line as the limit of secant lines serves as the motivation for analytical methods that are used to find tangent lines explicitly. The question of finding the tangent line to a graph, or the tangent line problem, was one of the central questions leading to the development of calculus in the 17th century. Suppose that a curve is given as the graph of a function, y = f(x).
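The limit-of-secants idea can be seen numerically: secant slopes through a fixed point and an approaching second point tend to the tangent slope. The curve y = x² and the base point x = 1 are illustrative choices (the true tangent slope there is 2).

```python
def f(x):
    # Illustrative curve y = x**2; its tangent slope at x = 1 is exactly 2.
    return x * x

def secant_slope(f, a, b):
    # Slope of the secant line through (a, f(a)) and (b, f(b)).
    return (f(b) - f(a)) / (b - a)

# As the second point approaches x = 1, the secant slopes tend to the
# tangent slope.
slopes = [secant_slope(f, 1.0, 1.0 + h) for h in (0.1, 0.01, 0.001)]
```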
Tangent
–
Tangent to a curve. The red line is tangential to the curve at the point marked by a red dot.
26.
Tangent bundle
–
In differential geometry, the tangent bundle of a differentiable manifold M is a manifold TM which assembles all the tangent vectors of M. As a set, it is given by the disjoint union of the tangent spaces of M; that is, TM = ⨆x∈M TxM = ⋃x∈M ({x} × TxM), where TxM denotes the tangent space to M at the point x. So an element of TM can be thought of as a pair (x, v), where x is a point in M and v is a tangent vector to M at x. There is a natural projection π : TM → M defined by π(x, v) = x; this projection maps each tangent space TxM to the single point x. The tangent bundle comes equipped with a natural topology; with this topology, the tangent bundle of a manifold is the prototypical example of a vector bundle. A section of TM is a vector field on M, and the dual bundle to TM is the cotangent bundle. By definition, a manifold M is parallelizable if and only if the tangent bundle is trivial, and a manifold M is framed if and only if the tangent bundle TM is stably trivial; for example, the n-dimensional sphere Sn is framed for all n, but parallelizable only for n = 1, 3, 7. One of the roles of the tangent bundle is to provide a domain and range for the derivative of a smooth function. Namely, if f : M → N is a smooth function, with M and N smooth manifolds, its derivative is a smooth function Df : TM → TN. The tangent bundle comes equipped with a topology and smooth structure so as to make it into a manifold in its own right. The dimension of TM is twice the dimension of M, since each tangent space of an n-dimensional manifold is an n-dimensional vector space. If U is an open subset of M, then there is a diffeomorphism from TU to U × Rn which restricts to a linear isomorphism from each tangent space TxU to {x} × Rn. As a manifold, however, TM is not always diffeomorphic to the product manifold M × Rn; when it is of the form M × Rn, the tangent bundle is said to be trivial. Trivial tangent bundles usually occur for manifolds equipped with a compatible group structure, for instance Lie groups.
The tangent bundle of the circle is trivial because the circle is a Lie group. It is not true, however, that all spaces with trivial tangent bundles are Lie groups. Just as manifolds are locally modelled on Euclidean space, tangent bundles are locally modelled on U × Rn, where U is an open subset of Euclidean space. If M is a smooth manifold, then it comes equipped with an atlas of charts (Uα, ϕα), where Uα is an open set in M and ϕα : Uα → Rn is a diffeomorphism onto its image.
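The triviality of the circle's tangent bundle can be made concrete: the derivative of the standard parameterization gives a nowhere-zero tangent vector field, so every tangent space is spanned by a single smoothly varying vector. A small numeric check of this, at a few hypothetical sample points, is sketched below.

```python
import math

def circle_point(t):
    # Standard parameterization of the unit circle S^1.
    return (math.cos(t), math.sin(t))

def tangent_vector(t):
    # Derivative of the parameterization: (-sin t, cos t). It is never zero,
    # so it spans every tangent line, exhibiting T(S^1) as S^1 x R.
    return (-math.sin(t), math.cos(t))

# The tangent vector is orthogonal to the position vector at each sample point.
dots = [sum(p * v for p, v in zip(circle_point(t), tangent_vector(t)))
        for t in (0.0, 1.0, 2.5)]
```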
Tangent bundle
–
Informally, the tangent bundle of a manifold (in this case a circle) is obtained by considering all the tangent spaces (top), and joining them together in a smooth and non-overlapping manner (bottom).
27.
Fluid mechanics
–
Fluid mechanics is a branch of physics concerned with the mechanics of fluids and the forces on them. Fluid mechanics has a wide range of applications, including mechanical engineering, civil engineering, chemical engineering, geophysics, and astrophysics. Fluid mechanics can be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Fluid mechanics, especially fluid dynamics, is an active field of research with many problems that are partly or wholly unsolved. Fluid mechanics can be mathematically complex, and such problems can often best be solved by numerical methods. A modern discipline, called computational fluid dynamics, is devoted to this approach to solving fluid mechanics problems. Particle image velocimetry, an experimental method for visualizing and analyzing fluid flow, also takes advantage of the highly visual nature of fluid flow. Inviscid flow was further analyzed by various mathematicians, and viscous flow was explored by a multitude of engineers including Jean Léonard Marie Poiseuille. Fluid statics or hydrostatics is the branch of fluid mechanics that studies fluids at rest. It embraces the study of the conditions under which fluids are at rest in stable equilibrium, and is contrasted with fluid dynamics. Hydrostatics is fundamental to hydraulics, the engineering of equipment for storing, transporting and using fluids. It is also relevant to some aspects of geophysics and astrophysics, to meteorology, and to medicine. Fluid dynamics is a subdiscipline of fluid mechanics that deals with fluid flow, the science of liquids and gases in motion. The solution to a fluid dynamics problem typically involves calculating various properties of the fluid, such as velocity, pressure, and density. It has several subdisciplines itself, including aerodynamics and hydrodynamics. Some fluid-dynamical principles are used in engineering and crowd dynamics.
Fluid mechanics is a subdiscipline of continuum mechanics, as illustrated in the following table. In a mechanical view, a fluid is a substance that does not support shear stress; that is why a fluid at rest has the shape of its containing vessel. A fluid at rest has no shear stress. The assumptions inherent to a fluid mechanical treatment of a physical system can be expressed in terms of mathematical equations; these can be expressed as equations in integral form over a control volume. The continuum assumption is an idealization of continuum mechanics under which fluids can be treated as continuous, even though, on a microscopic scale, they are composed of molecules. Fluid properties can vary continuously from one volume element to another and are average values of the molecular properties. The continuum hypothesis can lead to inaccurate results in applications like supersonic speed flows. Those problems for which the continuum hypothesis fails can be solved using statistical mechanics. To determine whether or not the continuum hypothesis applies, the Knudsen number, defined as the ratio of the molecular mean free path to the characteristic length scale, is evaluated. Problems with Knudsen numbers below 0.1 can be evaluated using the continuum hypothesis. The Navier–Stokes equations are differential equations that describe the force balance at a given point within a fluid.
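The Knudsen-number criterion mentioned above is a one-line computation. The sketch below uses an approximate textbook value for the mean free path of air at sea level (about 68 nm) and an assumed 1 m length scale; both numbers are illustrative.

```python
def knudsen(mean_free_path, length_scale):
    # Kn = lambda / L; the continuum hypothesis is considered reasonable
    # for Kn below roughly 0.1.
    return mean_free_path / length_scale

# Air at sea level has a mean free path of roughly 68 nm (illustrative value);
# take a 1 m characteristic length, e.g. flow over a meter-scale body.
kn = knudsen(68e-9, 1.0)
continuum_ok = kn < 0.1
```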
Fluid mechanics
–
Balance for some integrated fluid quantity in a
control volume enclosed by a
control surface.
28.
Differential geometry
–
Differential geometry is a mathematical discipline that uses the techniques of differential calculus, integral calculus, linear algebra and multilinear algebra to study problems in geometry. The theory of plane and space curves and surfaces in three-dimensional Euclidean space formed the basis for the development of differential geometry during the 18th century. Since the late 19th century, differential geometry has grown into a field concerned more generally with geometric structures on differentiable manifolds. Differential geometry is related to differential topology and the geometric aspects of the theory of differential equations. The differential geometry of surfaces captures many of the key ideas. Differential geometry arose and developed as a result of, and in connection to, the mathematical analysis of curves and surfaces. These unanswered questions indicated greater, hidden relationships. Initially applied to Euclidean space, further explorations led to non-Euclidean spaces, and to metric and topological spaces. Riemannian geometry studies Riemannian manifolds, smooth manifolds with a Riemannian metric: a concept of distance expressed by means of a smooth positive-definite symmetric bilinear form defined on the tangent space at each point. Various concepts based on length, such as the arc length of curves and the area of plane regions, have natural analogues in this setting. The notion of a directional derivative of a function from multivariable calculus is extended in Riemannian geometry to the notion of a covariant derivative of a tensor. Many concepts and techniques of analysis and differential equations have been generalized to the setting of Riemannian manifolds. A distance-preserving diffeomorphism between Riemannian manifolds is called an isometry; this notion can also be defined locally, i.e. for small neighborhoods of points. Any two regular curves are locally isometric.
In higher dimensions, the Riemann curvature tensor is an important pointwise invariant associated with a Riemannian manifold that measures how close it is to being flat. An important class of Riemannian manifolds is the Riemannian symmetric spaces, whose curvature is not necessarily constant; these are the closest analogues to the plane and space considered in Euclidean and non-Euclidean geometry. Pseudo-Riemannian geometry generalizes Riemannian geometry to the case in which the metric tensor need not be positive-definite. A special case of this is a Lorentzian manifold, which is the mathematical basis of Einstein's general relativity theory of gravity. Finsler geometry has the Finsler manifold as its main object of study. This is a differentiable manifold with a Finsler metric, i.e. a Banach norm defined on each tangent space. Riemannian manifolds are special cases of the more general Finsler manifolds. A Finsler structure on a manifold M is a function F : TM → [0, ∞) such that F(x, my) = |m| F(x, y) for all (x, y) in TM and all scalars m, and F is infinitely differentiable on TM ∖ {0}. Symplectic geometry is the study of symplectic manifolds. A symplectic manifold is an almost symplectic manifold for which the symplectic form ω is closed; a diffeomorphism between two symplectic manifolds which preserves the symplectic form is called a symplectomorphism. Non-degenerate skew-symmetric bilinear forms can only exist on even-dimensional vector spaces. In dimension 2, a symplectic manifold is just a surface endowed with an area form, and a symplectomorphism is an area-preserving diffeomorphism.
Differential geometry
–
A triangle immersed in a saddle-shape plane (a
hyperbolic paraboloid), as well as two diverging
ultraparallel lines.
29.
Total differential
–
In calculus, the differential represents the principal part of the change in a function y = f(x) with respect to changes in the independent variable. The differential dy is defined by dy = f′(x) dx, where f′(x) is the derivative of f with respect to x; one also writes df = f′(x) dx. The precise meaning of the variables dy and dx depends on the context of the application. Traditionally, the variables dx and dy are considered to be very small, and this interpretation is made rigorous in non-standard analysis. The quotient dy/dx is not infinitely small; rather, it is a real number. The use of infinitesimals in this form was widely criticized, for instance by the famous pamphlet The Analyst by Bishop Berkeley. Augustin-Louis Cauchy defined the differential without appeal to the atomism of Leibniz's infinitesimals. In physical treatments, such as those applied to the theory of thermodynamics, the infinitesimal view still prevails. Courant & John reconcile the use of infinitesimal differentials with the mathematical impossibility of them as follows: the differentials represent finite non-zero values that are smaller than the degree of accuracy required for the purpose for which they are intended. Thus physical infinitesimals need not appeal to a corresponding mathematical infinitesimal in order to have a precise sense. Following twentieth-century developments in mathematical analysis and differential geometry, it became clear that the notion of the differential of a function could be extended in a variety of ways. In real analysis, it is desirable to deal directly with the differential as the principal part of the increment of a function. This leads directly to the notion that the differential of a function at a point is a linear functional of an increment Δx. This approach allows the differential to be developed for a variety of more sophisticated spaces. In non-standard calculus, differentials are regarded as infinitesimals, which can themselves be put on a rigorous footing.
The differential is defined in modern treatments of calculus as follows. The differential of a function f of a real variable x is the function df of two independent real variables x and Δx given by df(x, Δx) = f′(x) Δx. One or both of the arguments may be suppressed, i.e. one may see df(x) or simply df; if y = f(x), the differential may also be written as dy. For a function y of several variables x1, …, xn, the partial differential with respect to x1 is (∂y/∂x1) dx1, involving the partial derivative of y with respect to x1. The total differential is then defined as dy = (∂y/∂x1) Δx1 + ⋯ + (∂y/∂xn) Δxn. Note that, with this definition, dxi = Δxi. In measurement, the total differential is used in estimating the error Δf of a function f based on the errors Δx, Δy, … of the parameters x, y, …. As they are assumed to be independent, the analysis describes the worst-case scenario; the absolute values of the component errors are used, because after a simple computation the derivative may have a negative sign.
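The worst-case error estimate via the total differential, Δf ≈ |∂f/∂x| Δx + |∂f/∂y| Δy, can be sketched for the simple product f(x, y) = xy, whose partial derivatives are y and x. The measured values and error bars below are hypothetical.

```python
def worst_case_error(x, y, dx, dy):
    # Total-differential error estimate for f(x, y) = x * y:
    # df = y dx + x dy, so the worst case is |y|*dx + |x|*dy.
    return abs(y) * dx + abs(x) * dy

# Hypothetical measurements: x = 2.0 +/- 0.1, y = 3.0 +/- 0.2.
err = worst_case_error(2.0, 3.0, 0.1, 0.2)
# Expected bound: 3.0*0.1 + 2.0*0.2 = 0.7.
```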
Total differential
–
The differential of a function ƒ(x) at a point x0.
30.
Covariance and contravariance of vectors
–
In multilinear algebra and tensor analysis, covariance and contravariance describe how the quantitative description of certain geometric or physical entities changes with a change of basis. In physics, a basis is sometimes thought of as a set of reference axes; a change of scale on the reference axes corresponds to a change of units in the problem. For instance, in changing scale from meters to centimeters, the components of a velocity vector will multiply by 100. Vectors exhibit this behavior of changing scale inversely to changes in scale of the reference axes; as a result, vectors often have units of distance or distance times some other unit. In contrast, dual vectors typically have units the inverse of distance or the inverse of distance times some other unit; an example of a dual vector is the gradient, which has units of a spatial derivative, or distance−1. The components of dual vectors change in the same way as changes to scale of the reference axes. For a vector to be basis-independent, the matrix that transforms the vector of components must be the inverse of the matrix that transforms the basis vectors; the components of such vectors are said to be contravariant. In Einstein notation, contravariant components are denoted with upper indices, as in v = v^i e_i. For a dual vector to be basis-independent, the components of the dual vector must co-vary with a change of basis to remain representing the same covector; that is, the components must be transformed by the same matrix as the change-of-basis matrix. The components of such vectors are said to be covariant. Examples of covariant vectors generally appear when taking a gradient of a function. In Einstein notation, covariant components are denoted with lower indices, as in v = v_i e^i. Curvilinear coordinate systems, such as cylindrical or spherical coordinates, are often used in physical and geometric problems.
Tensors are objects in multilinear algebra that can have aspects of both covariance and contravariance. In physics, a vector typically arises as the outcome of a measurement or series of measurements, and is represented as a list of numbers. The numbers in the list depend on the choice of coordinate system. For a vector to represent a geometric object, it must be possible to describe how it looks in any other coordinate system; that is to say, the components of the vector will transform in a certain way in passing from one coordinate system to another. A contravariant vector has components that transform as the coordinates do under changes of coordinates, including rotation and dilation. The vector itself does not change under these operations; instead, the components of the vector change in a way that cancels the change in the axes. In other words, if the reference axes were rotated in one direction, the component representation of the vector would rotate in exactly the opposite way. Similarly, if the reference axes were stretched in one direction, the components would change in an exactly compensating way. This important requirement is what distinguishes a contravariant vector from any other triple of physically meaningful quantities.
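The inverse transformation of contravariant components can be demonstrated with a pure scale change: if each basis vector is rescaled by a factor s, the components rescale by 1/s and the geometric vector is unchanged. The meters-to-centimeters example below mirrors the one in the text.

```python
def rescale_components(components, s):
    # If each new basis vector is s times the old one, contravariant
    # components transform by the inverse factor 1/s.
    return [c / s for c in components]

s = 0.01                               # 1 cm = 0.01 m: new basis is s * old
v_m = [1.5, -2.0]                      # components with meter basis vectors
v_cm = rescale_components(v_m, s)      # components with centimeter basis vectors

# The geometric vector (component times basis length) is unchanged.
invariant_m = [c * 1.0 for c in v_m]
invariant_cm = [c * (s * 1.0) for c in v_cm]
```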
Covariance and contravariance of vectors
–
tangent basis vectors (yellow, left: e 1, e 2, e 3) to the coordinate curves (black),
31.
Del
–
Del, or nabla, is an operator used in mathematics, in particular in vector calculus, as a vector differential operator, usually represented by the nabla symbol ∇. When applied to a function defined on a one-dimensional domain, it denotes its standard derivative as defined in calculus. When applied to a field, del may denote the gradient of a scalar field, the divergence of a vector field, or the curl of a vector field, depending on the way it is applied. Strictly speaking, del is not a specific operator, but rather a convenient mathematical notation for those three operators that makes many equations easier to write and remember. These formal products do not necessarily commute with other operators or products. Del is used as a shorthand form to simplify many long mathematical expressions; it is most commonly used to simplify expressions for the gradient, divergence, curl, directional derivative, and Laplacian. In particular, if a hill is defined as a height function h(x, y) over a plane, the gradient of h at a point is a vector pointing in the direction of the steepest slope at that point, and the magnitude of the gradient is the value of this steepest slope. When operating on a vector, del must be distributed to each component. The Laplacian is ubiquitous throughout modern mathematical physics, appearing for example in Laplace's equation, Poisson's equation, the heat equation, and the wave equation. Del can also be applied to a vector field, with the result being a tensor. The tensor derivative of a vector field v→ is a 9-term second-rank tensor (that is, a 3×3 matrix), but can be denoted simply as ∇ ⊗ v→, where ⊗ represents the dyadic product. This quantity is equivalent to the transpose of the Jacobian matrix of the vector field with respect to space. The divergence of the vector field can then be expressed as the trace of this matrix. Because of the diversity of vector products, one application of del already gives rise to three major derivatives: the gradient, divergence, and curl. This is part of the value to be gained in notationally representing this operator as a vector, since one can often replace del with a vector and obtain a vector identity, making those identities mnemonic.
Whereas a vector is an object with both a magnitude and a direction, del has neither a magnitude nor a direction until it operates on a function. For that reason, identities involving del must be derived with care.
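As a concrete instance of del acting on a scalar field, the sketch below approximates the gradient with central differences and compares it to the analytic answer for an assumed "hill" height function h(x, y) = −(x² + 2y²).

```python
H = 1e-6  # step size for the finite differences

def grad(f, x, y):
    # Central-difference approximation of the gradient (nabla f).
    return ((f(x + H, y) - f(x - H, y)) / (2 * H),
            (f(x, y + H) - f(x, y - H)) / (2 * H))

def hill(x, y):
    # Assumed hill height function h(x, y) = -(x^2 + 2*y^2).
    return -(x * x + 2 * y * y)

g = grad(hill, 1.0, 1.0)
# The analytic gradient (-2x, -4y) at (1, 1) is (-2, -4).
```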
Del
–
DCG chart: A simple chart depicting all rules pertaining to second derivatives. D, C, G, L and CC stand for divergence, curl, gradient, Laplacian and curl of curl, respectively. Arrows indicate existence of second derivatives. Blue circle in the middle represents curl of curl, whereas the other two red circles (dashed) mean that DD and GG do not exist.
32.
Linear operator
–
In mathematics, a linear map is a mapping V → W between two modules that preserves the operations of addition and scalar multiplication. An important special case is when V = W, in which case the map is called a linear operator, or an endomorphism of V. Sometimes the term linear function has the same meaning as linear map. A linear map always maps linear subspaces onto linear subspaces; for instance, it maps a plane through the origin to a plane. Linear maps can often be represented as matrices, and simple examples include rotation and reflection linear transformations. In the language of abstract algebra, a linear map is a module homomorphism; in the language of category theory, it is a morphism in the category of modules over a given ring. Let V and W be vector spaces over the same field K. A function f : V → W is said to be a linear map if for any vectors x1, …, xm ∈ V and scalars a1, …, am ∈ K, the equality f(a1x1 + ⋯ + amxm) = a1f(x1) + ⋯ + amf(xm) holds. Occasionally, V and W can be vector spaces over different fields; it is then necessary to specify which of these fields is being used in the definition of linear. If V and W are considered as spaces over the field K as above, we talk about K-linear maps. For example, the conjugation of complex numbers is an R-linear map C → C, but it is not C-linear. A linear map from V to K is called a linear functional. These statements generalize to any left-module RM over a ring R without modification, and to any right-module upon reversing the scalar multiplication. The zero map between two left-modules over the same ring is always linear. The identity map on any module is a linear operator. Any homothety centered at the origin of a vector space, v ↦ cv where c is a scalar, is a linear operator; this does not hold in general for modules, where such a map might only be semilinear. For real numbers, the map x ↦ x2 is not linear. Conversely, any linear map between finite-dimensional vector spaces can be represented by a matrix; see the following section. Differentiation defines a linear map from the space of all differentiable functions to the space of all functions. It also defines a linear operator on the space of all smooth functions.
If V and W are finite-dimensional vector spaces over a field F, then functions that send linear maps f : V → W to dimF(W) × dimF(V) matrices in the way described in the sequel are themselves linear maps. The expected value of a random variable is linear, as for random variables X and Y we have E[X + Y] = E[X] + E[Y] and E[aX] = aE[X].
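The defining identity of a linear map, f(ax + by) = af(x) + bf(y), can be spot-checked numerically for a map given by a matrix. The matrix, sample vectors, and scalars below are arbitrary illustrative choices.

```python
A = [[2, 0], [1, 3]]  # arbitrary matrix defining the map f(v) = A v

def apply(A, v):
    # Matrix-vector product: the prototypical linear map.
    return [sum(a * x for a, x in zip(row, v)) for row in A]

x, y = [1.0, 2.0], [-3.0, 0.5]
a, b = 2.0, -1.0

# f(a*x + b*y) should equal a*f(x) + b*f(y).
lhs = apply(A, [a * xi + b * yi for xi, yi in zip(x, y)])
rhs = [a * fx + b * fy for fx, fy in zip(apply(A, x), apply(A, y))]
```

One random check does not prove linearity, but a matrix map satisfies the identity for every choice of inputs, which is exactly what makes the matrix representation work.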
Linear operator
–
"Linear transformation" redirects here. For fractional linear transformations, see
Möbius transformation.
33.
Smooth function
–
In mathematical analysis, the smoothness of a function is a property measured by the number of derivatives it has which are continuous. A smooth function is a function that has derivatives of all orders everywhere in its domain. A differentiability class is a classification of functions according to the properties of their derivatives; higher-order differentiability classes correspond to the existence of more derivatives. Consider an open set on the real line and a function f defined on that set with real values. Let k be a non-negative integer. The function f is said to be of class Ck if the derivatives f′, f′′, …, f(k) exist and are continuous. The function f is said to be of class C∞, or smooth, if it has derivatives of all orders. The function f is said to be of class Cω, or analytic, if f is smooth and equals its Taylor series expansion around any point in its domain; Cω is thus strictly contained in C∞. Bump functions are examples of functions in C∞ but not in Cω. To put it differently, the class C0 consists of all continuous functions, and the class C1 consists of all differentiable functions whose derivative is continuous; thus, a C1 function is exactly a function whose derivative exists and is of class C0. In particular, Ck is contained in Ck−1 for every k, and C∞, the class of infinitely differentiable functions, is the intersection of the sets Ck as k varies over the non-negative integers. The function f(x) = x if x ≥ 0 and f(x) = 0 if x < 0 is continuous but not differentiable at x = 0, so it is of class C0 but not C1. The function g(x) = x2 sin(1/x) if x ≠ 0 and g(0) = 0 is differentiable everywhere, but because cos(1/x) oscillates as x → 0, g′ is not continuous at zero; therefore, this function is differentiable but not of class C1. The functions f(x) = |x|^(k+1), where k is even, are continuous and k times differentiable at all x, but at x = 0 they are not (k + 1) times differentiable, so they are of class Ck but not of class Ck+1. The exponential function is analytic, so of class Cω; the trigonometric functions are also analytic wherever they are defined. The bump function is an example of a smooth function with compact support. Let n and m be positive integers. If f is a function from an open subset of Rn with values in Rm, then f has component functions f1, …, fm.
Each of these may or may not have partial derivatives. The classes C∞ and Cω are defined as before. These criteria of differentiability can be applied to the transition functions of a differential structure; the resulting space is called a Ck manifold. If one wishes to start with a coordinate-independent definition of the class Ck, one may start by considering maps between Banach spaces. A map from one Banach space to another is differentiable at a point if there is an affine map which approximates it at that point.
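The bump function mentioned above (smooth and compactly supported, hence C∞ but not Cω) has a standard construction: exp(−1/(1 − x²)) inside (−1, 1) and 0 outside. A minimal sketch:

```python
import math

def bump(x):
    # Classic bump function: exp(-1/(1 - x^2)) on (-1, 1), zero outside.
    # It is infinitely differentiable (C-infinity) but not analytic (C-omega):
    # all of its derivatives vanish at x = +/-1, yet it is not identically
    # zero, so it cannot equal its Taylor series there.
    if abs(x) >= 1.0:
        return 0.0
    return math.exp(-1.0 / (1.0 - x * x))

values = [bump(x) for x in (-2.0, -1.0, 0.0, 1.0, 2.0)]
```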
Smooth function
–
A
bump function is a smooth function with
compact support.
34.
Systems of linear equations
–
In mathematics, a system of linear equations is a collection of two or more linear equations involving the same set of variables. A solution to a system is an assignment of values to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by x = 1, y = −2, z = −2, since it makes all three equations valid. The word system indicates that the equations are to be considered collectively. In mathematics, the theory of linear systems is the basis and a fundamental part of linear algebra, a subject which is used in most parts of modern mathematics. Computational algorithms for finding the solutions are an important part of linear algebra, and play a prominent role in engineering, physics, chemistry, and computer science. A system of non-linear equations can often be approximated by a linear system. For solutions in an integral domain like the ring of the integers, or in other algebraic structures, other theories have been developed: integer linear programming is a collection of methods for finding the best integer solution; Gröbner basis theory provides algorithms when coefficients and unknowns are polynomials; and tropical geometry is an example of linear algebra in a more exotic structure. The simplest kind of linear system involves two equations and two variables: 2x + 3y = 6, 4x + 9y = 15. One method for solving such a system is as follows. First, solve the top equation for x in terms of y: x = 3 − (3/2)y. Now substitute this expression for x into the bottom equation: 4(3 − (3/2)y) + 9y = 15. This results in a single equation involving only the variable y. Solving gives y = 1, and substituting this back into the equation for x yields x = 3/2. Here x1, x2, …, xn are the unknowns, a11, a12, …, amn are the coefficients of the system, and b1, b2, …, bm are the constant terms. Often the coefficients and unknowns are real or complex numbers, but integers and rational numbers are also seen, as are polynomials.
One extremely helpful view is that each unknown is a weight for a column vector in a linear combination: x1 a1 + x2 a2 + ⋯ + xn an = b, where ai is the column of coefficients of xi and b is the vector of constant terms. This allows all the language and theory of vector spaces to be brought to bear. If every vector within the span of the left-hand vectors has exactly one expression as a linear combination of them, then any solution is unique. This is important because if we have m independent vectors, a solution is guaranteed regardless of the right-hand side; otherwise it is not guaranteed.
Systems of linear equations
–
A linear system in three variables determines a collection of
planes. The intersection point is the solution.
35.
Jacobian matrix
–
In vector calculus, the Jacobian matrix is the matrix of all first-order partial derivatives of a vector-valued function. When the matrix is square, both the matrix and its determinant are referred to as the Jacobian in the literature. Suppose f : ℝn → ℝm is a function that takes as input a vector x ∈ ℝn. Then the Jacobian matrix J of f is an m×n matrix whose (i, j) entry is ∂fi/∂xj. This matrix, whose entries are functions of x, is also denoted by Df, Jf, and ∂(f1, …, fm)/∂(x1, …, xn). The Jacobian defines a linear map that is the best linear approximation of f near x; this linear map is thus the generalization of the notion of derivative. If m = n, the Jacobian matrix is square, and its determinant, the Jacobian determinant, is well defined. It carries important information about the behavior of f. In particular, f has locally, in a neighborhood of a point x, an inverse function that is differentiable if the Jacobian determinant at x is non-zero. The Jacobian determinant also appears when changing the variables in multiple integrals. If m = 1, f is a scalar field and the Jacobian matrix is reduced to a row vector of partial derivatives of f, i.e. the gradient of f. These concepts are named after the mathematician Carl Gustav Jacob Jacobi. The Jacobian generalizes the gradient of a scalar-valued function of multiple variables, which itself generalizes the derivative of a scalar-valued function of a single variable. In other words, the Jacobian for a scalar-valued multivariate function is the gradient. The Jacobian can also be thought of as describing the amount of stretching, rotating or transforming that a transformation imposes locally. For example, if (x′, y′) = f(x, y) is used to transform an image, the Jacobian Jf(x, y) describes how the image in the neighborhood of (x, y) is transformed. If p is a point in ℝn and f is differentiable at p, then f(x) = f(p) + Jf(p)(x − p) + o(‖x − p‖); compare this to a Taylor series for a scalar function of a scalar argument, truncated to first order: f(x) = f(p) + f′(p)(x − p) + o(x − p). The Jacobian of the gradient of a function of several variables has a special name: the Hessian matrix. 
If m = n, then f is a function from ℝn to itself and we can form the determinant of its Jacobian matrix, known as the Jacobian determinant. The Jacobian determinant is occasionally referred to as simply "the Jacobian". The Jacobian determinant at a given point gives important information about the behavior of f near that point. For instance, the continuously differentiable function f is invertible near a point p ∈ ℝn if the Jacobian determinant at p is non-zero; this is the inverse function theorem. Furthermore, if the Jacobian determinant at p is positive, then f preserves orientation near p; if it is negative, f reverses orientation
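The Jacobian determinant's role as a local scaling factor can be illustrated numerically. A sketch using central finite differences; the function `jacobian`, the polar-to-Cartesian test map, and the step size `h` are illustrative choices, not from the text:

```python
import math

def jacobian(f, p, h=1e-6):
    """Approximate the m x n Jacobian matrix of f at p by central differences."""
    m = len(f(p))
    n = len(p)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        plus, minus = list(p), list(p)
        plus[j] += h
        minus[j] -= h
        fp, fm = f(plus), f(minus)
        for i in range(m):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

# Polar-to-Cartesian map f(r, theta) = (r cos theta, r sin theta).
def polar(p):
    r, t = p
    return [r * math.cos(t), r * math.sin(t)]

J = jacobian(polar, [2.0, 0.5])
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
# For this map the Jacobian determinant equals r (here 2):
# small areas near the point are scaled by a factor of r.
```

The analytic Jacobian of this map is [[cos θ, −r sin θ], [sin θ, r cos θ]] with determinant r, so the numerical value should be close to 2.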
Jacobian matrix
–
A nonlinear map f: R 2 → R 2 sends a small square to a distorted parallelogram close to the image of the square under the best linear approximation of f near the point.
36.
Linear algebra
–
Linear algebra is the branch of mathematics concerning vector spaces and linear mappings between such spaces. It includes the study of lines, planes, and subspaces; the set of points with coordinates that satisfy a linear equation forms a hyperplane in an n-dimensional space. The conditions under which a set of n hyperplanes intersect in a single point are an important focus of study in linear algebra. Such an investigation is initially motivated by a system of linear equations containing several unknowns; such equations are naturally represented using the formalism of matrices and vectors. Linear algebra is central to both pure and applied mathematics. For instance, abstract algebra arises by relaxing the axioms of a vector space, leading to a number of generalizations. Functional analysis studies the infinite-dimensional version of the theory of vector spaces. Combined with calculus, linear algebra facilitates the solution of linear systems of differential equations. Because linear algebra is such a well-developed theory, nonlinear mathematical models are sometimes approximated by linear models. The study of linear algebra first emerged from the study of determinants: determinants were used by Leibniz in 1693, and subsequently Gabriel Cramer devised Cramer's Rule for solving linear systems in 1750. Later, Gauss further developed the theory of solving linear systems by using Gaussian elimination. The study of matrix algebra first emerged in England in the mid-1800s. In 1844 Hermann Grassmann published his Theory of Extension, which included foundational new topics of what is today called linear algebra. In 1848, James Joseph Sylvester introduced the term matrix, which is Latin for womb. While studying compositions of linear transformations, Arthur Cayley was led to define matrix multiplication and inverses. Crucially, Cayley used a single letter to denote a matrix. 
In 1882, Hüseyin Tevfik Pasha wrote the book titled Linear Algebra. The first modern and more precise definition of a vector space was introduced by Peano in 1888; by 1900, a theory of linear transformations of finite-dimensional vector spaces had emerged. Linear algebra took its modern form in the first half of the twentieth century. The use of matrices in quantum mechanics, special relativity, and statistics helped spread the subject beyond pure mathematics. The origin of many of these ideas is discussed in the articles on determinants and Gaussian elimination. Linear algebra first appeared in American graduate textbooks in the 1940s. Following work by the School Mathematics Study Group, U.S. high schools asked 12th grade students to do matrix algebra, formerly reserved for college, in the 1960s. In France during the 1960s, educators attempted to teach linear algebra through finite-dimensional vector spaces in the first year of secondary school; this was met with a backlash in the 1980s that removed linear algebra from the curriculum. To better suit 21st century applications, such as data mining and uncertainty analysis, linear algebra can be based upon the singular value decomposition instead of Gaussian elimination
Linear algebra
–
The three-dimensional
Euclidean space R 3 is a vector space, and lines and planes passing through the
origin are vector subspaces in R 3.
37.
Real number
–
In mathematics, a real number is a value that represents a quantity along a continuous line. The adjective real in this context was introduced in the 17th century by René Descartes. The real numbers include all the rational numbers, such as the integer −5 and the fraction 4/3, and all the irrational numbers, such as √2. Included within the irrationals are the transcendental numbers, such as π. Real numbers can be thought of as points on an infinitely long line called the number line or real line. Any real number can be determined by a possibly infinite decimal representation, such as that of 8.632. The real line can be thought of as a part of the complex plane, and complex numbers include real numbers. These descriptions of the real numbers are not sufficiently rigorous by the modern standards of pure mathematics; rigorous constructions define the reals, for example, as Dedekind cuts or as equivalence classes of Cauchy sequences of rationals. All these definitions satisfy the axiomatic definition and are thus equivalent. The statement that there is no subset of the reals with cardinality strictly greater than ℵ0 and strictly smaller than that of the reals is known as the continuum hypothesis. Simple fractions were used by the Egyptians around 1000 BC; the Vedic Sulba Sutras (c. 600 BC) include what may be the first use of irrational numbers. Around 500 BC, the Greek mathematicians led by Pythagoras realized the need for irrational numbers, in particular the irrationality of the square root of 2. Arabic mathematicians merged the concepts of number and magnitude into a more general idea of real numbers. In the 16th century, Simon Stevin created the basis for modern decimal notation; in the 17th century, Descartes introduced the term real to describe roots of a polynomial, distinguishing them from imaginary ones. In the 18th and 19th centuries, there was much work on irrational and transcendental numbers. Johann Heinrich Lambert gave the first flawed proof that π cannot be rational, and Adrien-Marie Legendre completed the proof. Évariste Galois developed techniques for determining whether a given equation could be solved by radicals, which gave rise to the field of Galois theory. 
Charles Hermite first proved that e is transcendental, and Ferdinand von Lindemann showed that π is transcendental. Lindemann's proof was much simplified by Weierstrass, still further by David Hilbert, and was finally made elementary by Adolf Hurwitz and Paul Gordan. The development of calculus in the 18th century used the entire set of real numbers without having defined them rigorously. The first rigorous definition was given by Georg Cantor in 1871. In 1874, he showed that the set of all real numbers is uncountably infinite but the set of all algebraic numbers is countably infinite. Contrary to widely held beliefs, his first method was not his famous diagonal argument, which he published in 1891. The real number system can be defined axiomatically up to an isomorphism, which is described hereafter. Another possibility is to start from some rigorous axiomatization of Euclidean geometry; from the structuralist point of view all these constructions are on equal footing
Real number
–
A symbol of the set of real numbers (ℝ)
38.
Set (mathematics)
–
In mathematics, a set is a well-defined collection of distinct objects, considered as an object in its own right. For example, the numbers 2, 4, and 6 are distinct objects when considered separately, but when considered collectively they form a single set of size three, written {2, 4, 6}. Sets are one of the most fundamental concepts in mathematics. Developed at the end of the 19th century, set theory is now a ubiquitous part of mathematics. In mathematics education, elementary topics such as Venn diagrams are taught at a young age. The German word Menge, rendered as "set" in English, was coined by Bernard Bolzano in his work The Paradoxes of the Infinite. A set is a collection of distinct objects. The objects that make up a set can be anything: numbers, people, letters of the alphabet, other sets, and so on. Sets are conventionally denoted with capital letters. Sets A and B are equal if and only if they have precisely the same elements. Cantor's definition turned out to be inadequate; instead, the notion of a set is taken as an undefined primitive notion in axiomatic set theory. There are two ways of describing, or specifying the members of, a set. One way is by intensional definition, using a rule or semantic description: A is the set whose members are the first four positive integers; B is the set of colors of the French flag. The second way is by extension, that is, listing each member of the set. An extensional definition is denoted by enclosing the list of members in curly brackets: C = {4, 2, 1, 3} and D = {blue, white, red}. One often has the choice of specifying a set either intensionally or extensionally; in the examples above, A = C and B = D. There are two important points to note about sets. First, in an extensional definition, a set member can be listed two or more times, for example {11, 6, 6}. However, per extensionality, two definitions of sets which differ only in that one of the definitions lists set members multiple times define, in fact, the same set. Hence, the set {11, 6, 6} is identical to the set {11, 6}. 
The second important point is that the order in which the elements of a set are listed is irrelevant. We can illustrate these two important points with an example: {6, 11} = {11, 6} = {11, 6, 6, 11}. For sets with many elements, the enumeration of members can be abbreviated. For instance, the set of the first thousand positive integers may be specified extensionally as {1, 2, 3, …, 1000}, where the ellipsis indicates that the list continues in the obvious way. Ellipses may also be used where sets have infinitely many members; thus the set of positive even numbers can be written as {2, 4, 6, 8, …}
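These two points, that duplicates collapse and that order is irrelevant, mirror the behavior of set types in programming languages; a small Python illustration (the comprehension for A is an illustrative way to express the intensional rule):

```python
# Listing a member more than once does not change the set...
assert {11, 6, 6, 11} == {11, 6}
# ...and neither does the order in which members are listed.
assert {6, 11} == {11, 6}

# An intensional definition (a rule) and an extensional one (a list)
# can specify the same set, like A = C in the text above.
A = {n for n in range(1, 5)}   # the first four positive integers, by rule
C = {4, 2, 1, 3}               # the same set, by enumeration
assert A == C
```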
Set (mathematics)
–
A set of polygons in a
Venn diagram
40.
Cartesian product
–
In set theory, a Cartesian product is a mathematical operation that returns a set from multiple sets. That is, for sets A and B, the Cartesian product A × B is the set of all ordered pairs (a, b) where a ∈ A and b ∈ B. Products can be specified using set-builder notation, e.g. A × B = {(a, b) | a ∈ A and b ∈ B}. A table can be created by taking the Cartesian product of a set of rows and a set of columns: if the Cartesian product rows × columns is taken, the cells of the table contain ordered pairs of the form (row value, column value). More generally, a Cartesian product of n sets, also known as an n-fold Cartesian product, can be represented by an array of n dimensions; an ordered pair is a 2-tuple or couple. The Cartesian product is named after René Descartes, whose formulation of analytic geometry gave rise to the concept. An illustrative example is the standard 52-card deck. The standard playing card ranks form a 13-element set, and the card suits form a four-element set. The Cartesian product of these sets returns a 52-element set consisting of 52 ordered pairs. Ranks × Suits returns a set of pairs of the form (rank, suit); Suits × Ranks returns a set of pairs of the form (suit, rank). Both sets are distinct, even disjoint. The main historical example is the Cartesian plane in analytic geometry. Usually, such a pair's first and second components are called its x and y coordinates, respectively (cf. picture). The set of all such pairs is thus assigned to the set of all points in the plane. A formal definition of the Cartesian product from set-theoretical principles follows from a definition of ordered pair. The most common definition of ordered pairs, the Kuratowski definition, is (x, y) = {{x}, {x, y}}. Note that, under this definition, X × Y ⊆ P(P(X ∪ Y)), where P represents the power set. Therefore, the existence of the Cartesian product of any two sets in ZFC follows from the axioms of pairing, union, and power set. Let A, B, C, and D be sets. The Cartesian product is not associative: (A × B) × C ≠ A × (B × C). If for example A = {1}, then (A × A) × A = {((1, 1), 1)} ≠ {(1, (1, 1))} = A × (A × A). The Cartesian product behaves nicely with respect to intersections (cf. left picture). 
(A ∩ B) × (C ∩ D) = (A × C) ∩ (B × D). In most cases the above statement is not true if we replace intersection with union (cf. middle picture). Other properties are related to subsets: if A ⊆ B then A × C ⊆ B × C. The cardinality of a set is the number of elements of the set. For example, define two sets A = {a, b} and B = {5, 6}; both set A and set B consist of two elements each. Their Cartesian product, written as A × B, results in a new set with the elements A × B = {(a, 5), (a, 6), (b, 5), (b, 6)}, in which each element of A is paired with each element of B. Each pair makes up one element of the output set. The number of values in each element of the resulting set is equal to the number of sets whose Cartesian product is being taken, 2 in this case
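The card-deck and two-element examples can be reproduced directly with `itertools.product`, which computes n-fold Cartesian products:

```python
from itertools import product

ranks = ['A', 'K', 'Q', 'J', '10', '9', '8', '7', '6', '5', '4', '3', '2']
suits = ['spades', 'hearts', 'diamonds', 'clubs']

deck = list(product(ranks, suits))       # Ranks x Suits
assert len(deck) == 13 * 4               # 52 ordered pairs

# Order of factors matters: Suits x Ranks is a different (disjoint) set of pairs.
assert set(product(suits, ranks)).isdisjoint(set(deck))

# The two-element example: A = {a, b}, B = {5, 6} gives four pairs,
# each pair having 2 components (one per factor set).
A, B = ['a', 'b'], [5, 6]
assert list(product(A, B)) == [('a', 5), ('a', 6), ('b', 5), ('b', 6)]
```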
Cartesian product
–
Standard 52-card deck
Cartesian product
–
Cartesian product of two sets
41.
Vector space
–
A vector space is a collection of objects called vectors, which may be added together and multiplied by numbers, called scalars in this context. Scalars are often taken to be real numbers, but there are also vector spaces with scalar multiplication by complex numbers, rational numbers, or generally any field. The operations of vector addition and scalar multiplication must satisfy certain requirements, called axioms. Euclidean vectors are an example of a vector space; they represent physical quantities such as forces: any two forces can be added to yield a third, and the multiplication of a force vector by a real multiplier is another force vector. In the same vein, but in a more geometric sense, vectors representing displacements in the plane or in three-dimensional space also form vector spaces. Vector spaces are the subject of linear algebra and are well characterized by their dimension, which, roughly speaking, specifies the number of independent directions in the space. Infinite-dimensional vector spaces arise naturally in mathematical analysis, as function spaces, and these vector spaces are generally endowed with additional structure, which may be a topology, allowing the consideration of issues of proximity and continuity. Among these topologies, those that are defined by a norm or inner product are the most commonly used. This is particularly the case of Banach spaces and Hilbert spaces. Historically, the first ideas leading to vector spaces can be traced back as far as the 17th century's analytic geometry, matrices, systems of linear equations, and Euclidean vectors. Today, vector spaces are applied throughout mathematics, science, and engineering. Furthermore, vector spaces furnish an abstract, coordinate-free way of dealing with geometrical and physical objects such as tensors. This in turn allows the examination of local properties of manifolds by linearization techniques. Vector spaces may be generalized in several ways, leading to more advanced notions in geometry and abstract algebra. 
The concept of vector space will first be explained by describing two particular examples. The first example of a vector space consists of arrows in a fixed plane, starting at one fixed point. This is used in physics to describe forces or velocities. Given any two such arrows, v and w, the parallelogram spanned by these two arrows contains one diagonal arrow that starts at the origin, too. This new arrow is called the sum of the two arrows and is denoted v + w. Another operation, multiplication by a real number a, stretches or shrinks the arrow by that factor; when a is negative, av is defined as the arrow pointing in the opposite direction instead. The second example consists of pairs of real numbers x and y. Such a pair is written as (x, y); the sum of two such pairs and multiplication of a pair with a number are defined as follows: (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2) and a(x, y) = (ax, ay). The first example above reduces to this one if the arrows are represented by the pair of Cartesian coordinates of their end points. A vector space over a field F is a set V together with two operations that satisfy the eight axioms listed below. Elements of V are commonly called vectors, and elements of F are commonly called scalars. The first operation, called vector addition, takes any two vectors v and w and gives a third vector v + w; the second operation, called scalar multiplication, takes any scalar a and any vector v and gives another vector av. In this article, vectors are represented in boldface to distinguish them from scalars
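The second example, pairs of numbers with componentwise operations, is easy to make concrete; a minimal sketch that spot-checks a few of the axioms on particular vectors:

```python
def vadd(v, w):
    """Componentwise addition: (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2)."""
    return (v[0] + w[0], v[1] + w[1])

def smul(a, v):
    """Scalar multiplication: a(x, y) = (ax, ay)."""
    return (a * v[0], a * v[1])

v, w = (1.0, 2.0), (3.0, -1.0)

# A few vector-space axioms, checked on these particular vectors:
assert vadd(v, w) == vadd(w, v)                             # commutativity
assert smul(2, vadd(v, w)) == vadd(smul(2, v), smul(2, w))  # distributivity
assert vadd(v, smul(-1, v)) == (0.0, 0.0)                   # additive inverse
```

Checking finitely many cases does not prove the axioms, of course; here they hold for all pairs because addition and multiplication of real numbers have the corresponding properties.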
Vector space
–
Vector addition and scalar multiplication: a vector v (blue) is added to another vector w (red, upper illustration). Below, w is stretched by a factor of 2, yielding the sum v + 2 w.
42.
Perpendicular
–
In elementary geometry, the property of being perpendicular is the relationship between two lines which meet at a right angle. The property extends to other related geometric objects. A line is said to be perpendicular to another line if the two lines intersect at a right angle. Perpendicularity can be shown to be symmetric, meaning that if a first line is perpendicular to a second line, the second is also perpendicular to the first; for this reason, we may speak of two lines as being perpendicular without specifying an order. Perpendicularity easily extends to segments and rays. In symbols, A B ¯ ⊥ C D ¯ means line segment AB is perpendicular to line segment CD. A line is said to be perpendicular to a plane if it is perpendicular to every line in the plane that it intersects; this definition depends on the definition of perpendicularity between lines. Two planes in space are said to be perpendicular if the dihedral angle at which they meet is a right angle. Perpendicularity is one instance of the more general mathematical concept of orthogonality; perpendicularity is the orthogonality of classical geometric objects. Thus, in advanced mathematics, the word perpendicular is sometimes used to describe much more complicated geometric orthogonality conditions. The word foot is frequently used in connection with perpendiculars. This usage is exemplified in the top diagram, above; the diagram can be in any orientation, and the foot is not necessarily at the bottom. To construct the perpendicular to a line through a point P on it: Step 1, construct a circle centered at P, intersecting the line at two points A and B. Step 2, construct circles centered at A and B having equal radius, and let Q and R be the points of intersection of these two circles. Step 3, connect Q and R to construct the desired perpendicular PQ. To prove that PQ is perpendicular to AB, use the SSS congruence theorem for triangles QPA and QPB to conclude that angles QPA and QPB are equal right angles. To make the perpendicular to the line g at or through the point P using Thales' theorem, see the animation at right. 
The Pythagorean theorem can be used as the basis of methods of constructing right angles. For example, by counting links, three pieces of chain can be made with lengths in the ratio 3 : 4 : 5. These can be laid out to form a triangle, which will have a right angle opposite its longest side. This method is useful for laying out gardens and fields, where the dimensions are large and great accuracy is not needed; the chains can be used repeatedly whenever required. If two lines are perpendicular to a third line, all of the angles formed along the third line are right angles
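Both facts, that a 3-4-5 triangle has a right angle and that perpendicularity of vectors can be tested numerically, reduce to the Pythagorean theorem; equivalently, two vectors are perpendicular exactly when their dot product is zero. A quick check:

```python
# The 3-4-5 triangle satisfies the Pythagorean relation, so the angle
# opposite the longest side is a right angle.
assert 3**2 + 4**2 == 5**2

def dot(u, v):
    """Dot product of two vectors; zero exactly when they are perpendicular."""
    return sum(a * b for a, b in zip(u, v))

# The legs of the 3-4-5 triangle, placed along the axes, are perpendicular:
assert dot((3, 0), (0, 4)) == 0
# A non-right pair for contrast:
assert dot((1, 1), (1, 0)) != 0
```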
Perpendicular
–
The segment AB is perpendicular to the segment CD because the two angles it creates (indicated in orange and blue) are each 90 degrees.
43.
Unit vector
–
In mathematics, a unit vector in a normed vector space is a vector of length 1. A unit vector is often denoted by a lowercase letter with a circumflex, or "hat". The term direction vector is used to describe a unit vector being used to represent spatial direction; two 2D direction vectors, d1 and d2, are illustrated. 2D spatial directions represented this way are equivalent numerically to points on the unit circle. The same construct is used to specify spatial directions in 3D; as illustrated, each direction is equivalent numerically to a point on the unit sphere. The normalized vector or versor û of a non-zero vector u is the unit vector in the direction of u, i.e. û = u / ∥u∥, where ∥u∥ is the norm of u. The term normalized vector is sometimes used as a synonym for unit vector. Unit vectors are often chosen to form the basis of a vector space; every vector in the space may then be written as a linear combination of unit vectors. By definition, in a Euclidean space the dot product of two unit vectors is a scalar value amounting to the cosine of the smaller subtended angle. In three-dimensional Euclidean space, the cross product of two arbitrary unit vectors is a third vector orthogonal to both of them having length equal to the sine of the smaller subtended angle. Unit vectors may be used to represent the axes of a Cartesian coordinate system; they are often denoted using normal vector notation rather than standard unit vector notation. In most contexts it can be assumed that i, j, and k are the unit vectors of a 3D Cartesian coordinate system. The notations x̂, ŷ, ẑ, with or without hat, are also used, particularly in contexts where i, j, k might lead to confusion with another quantity. When a unit vector in space is expressed, with Cartesian notation, as a linear combination of i, j, k, its three scalar components can be referred to as direction cosines: the value of each component is equal to the cosine of the angle formed by the unit vector with the respective basis vector. This is one of the methods used to describe the orientation of a straight line, segment of straight line, or oriented axis. 
It is important to note that ρ̂ and φ̂ are functions of φ; when differentiating or integrating in cylindrical coordinates, these unit vectors themselves must also be operated on. For a more complete description, see Jacobian matrix. To minimize degeneracy, the polar angle is usually taken as 0 ≤ θ ≤ 180°. It is especially important to note the context of any ordered triplet written in spherical coordinates, as the roles of the symbols differ between conventions; here, the American physics convention is used
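Normalization and the cosine property of the dot product can be checked numerically; a sketch (the sample vectors are arbitrary choices):

```python
import math

def norm(u):
    """Euclidean length of a vector."""
    return math.sqrt(sum(x * x for x in u))

def normalize(u):
    """Return the versor u_hat = u / ||u|| of a non-zero vector u."""
    n = norm(u)
    return [x / n for x in u]

u = [3.0, 0.0, 4.0]
u_hat = normalize(u)
assert abs(norm(u_hat) - 1.0) < 1e-12        # unit length

# Dot product of two unit vectors = cosine of the angle between them.
i_hat, j_hat = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
cos_angle = sum(a * b for a, b in zip(i_hat, j_hat))
assert cos_angle == 0.0                      # 90 degrees: cos = 0
```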
Unit vector
–
Examples of two 2D direction vectors
44.
Scalar multiplication
–
In mathematics, scalar multiplication is one of the basic operations defining a vector space in linear algebra. In common geometrical contexts, scalar multiplication of a real Euclidean vector by a positive real number multiplies the magnitude of the vector without changing its direction. The term scalar itself derives from this usage: a scalar is that which scales vectors. Scalar multiplication is the multiplication of a vector by a scalar, and must be distinguished from the inner product of two vectors, which yields a scalar. In general, if K is a field and V is a vector space over K, then scalar multiplication is a function from K × V to V. The result of applying this function to c in K and v in V is denoted cv. Here + is addition either in the field or in the vector space, as appropriate, and 0 is the additive identity in either. Juxtaposition indicates either scalar multiplication or the multiplication operation in the field. Scalar multiplication may be viewed as an external binary operation or as an action of the field on the vector space. A geometric interpretation of scalar multiplication is that it stretches or contracts vectors by a constant factor. As a special case, V may be taken to be K itself, and scalar multiplication may then be taken to be simply the multiplication in the field. When V is Kn, scalar multiplication is equivalent to multiplication of each component with the scalar. The same idea applies if K is a commutative ring and V is a module over K. K can even be a rig, but then there is no additive inverse. If K is not commutative, the distinct operations of left scalar multiplication cv and right scalar multiplication vc may be defined. The left scalar multiplication of a matrix A with a scalar λ gives another matrix λA of the same size as A, with entries defined by (λA)ij = λ Aij. Similarly, the right scalar multiplication is defined by (Aλ)ij = Aij λ. When the underlying ring is commutative, for example the real or complex number field, these two multiplications are the same and are simply called scalar multiplication. However, for matrices over a more general ring that is not commutative, such as the quaternions, they may not be equal. 
For a real scalar and matrix: if λ = 2 and A = (a b; c d), then λA = (2a 2b; 2c 2d) = Aλ. For quaternion scalars and matrices: if λ = i and A = (i 0; 0 j), then λA = (i² 0; 0 ij) = (−1 0; 0 k), while Aλ = (i² 0; 0 ji) = (−1 0; 0 −k), so λA ≠ Aλ. The non-commutativity of quaternion multiplication is what makes the two products differ: ij = +k but ji = −k
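The quaternion counterexample rests on ij = +k but ji = −k, which can be verified with a tiny Hamilton-product routine (quaternions represented as (w, x, y, z) tuples; this is a sketch, not a full quaternion library):

```python
def qmul(p, q):
    """Hamilton product of quaternions p = (w, x, y, z) and q likewise."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)
neg_k = (0, 0, 0, -1)

# ij = +k but ji = -k: multiplying by the scalar i from the left and from
# the right gives different results, so left and right scalar multiplication
# of quaternion matrices can differ.
assert qmul(i, j) == k
assert qmul(j, i) == neg_k
```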
Scalar multiplication
–
Scalar multiplication of a vector by a factor of 3 stretches the vector out.
45.
Atlas (topology)
–
In mathematics, particularly topology, one describes a manifold using an atlas. An atlas consists of individual charts that, roughly speaking, describe individual regions of the manifold; if the manifold is the surface of the Earth, then an atlas has its more common meaning. In general, the notion of atlas underlies the formal definition of a manifold and related structures such as vector bundles. The definition of an atlas depends on the notion of a chart. A chart for a topological space M is a homeomorphism φ from an open subset U of M to an open subset of a Euclidean space; the chart is traditionally recorded as the ordered pair (U, φ). An atlas for a topological space M is a collection of charts (U_α, φ_α) on M such that ⋃ U_α = M. If the codomain of each chart is the n-dimensional Euclidean space, then M is said to be an n-dimensional manifold. A transition map provides a way of comparing two charts of an atlas. To make this comparison, we consider the composition of one chart with the inverse of the other; this composition is not well-defined unless we restrict both charts to the intersection of their domains of definition. To be more precise, suppose that (U_α, φ_α) and (U_β, φ_β) are two charts for a manifold M such that U_α ∩ U_β is non-empty. The transition map τ_α,β : φ_α(U_α ∩ U_β) → φ_β(U_α ∩ U_β) is the map defined by τ_α,β = φ_β ∘ φ_α⁻¹. Note that since φ_α and φ_β are both homeomorphisms, the transition map τ_α,β is also a homeomorphism. One often desires more structure on a manifold than simply the topological structure. For example, if one would like an unambiguous notion of differentiation of functions on a manifold, the transition maps must be differentiable; such a manifold is called differentiable. Given a differentiable manifold, one can define the notion of tangent vectors. If each transition function is a smooth map, then the atlas is called a smooth atlas. Alternatively, one could require that the transition maps have only k continuous derivatives, in which case the atlas is said to be C^k. 
Very generally, if each transition function belongs to a pseudogroup G of homeomorphisms of Euclidean space, then the atlas is called a G-atlas. If the transition maps between charts of an atlas preserve a local trivialization, then the atlas defines the structure of a fibre bundle.
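A concrete instance of a transition map, as a hypothetical minimal example not from the text: two angle charts on the unit circle, one returning angles in (−π, π) and one in (0, 2π). On the lower half of their overlap, the transition map φ₂ ∘ φ₁⁻¹ is the smooth shift t ↦ t + 2π:

```python
import math

def phi1(p):
    """Chart on the circle minus the point (-1, 0): angle in (-pi, pi)."""
    x, y = p
    return math.atan2(y, x)

def phi2(p):
    """Chart on the circle minus the point (1, 0): angle in (0, 2*pi)."""
    t = math.atan2(p[1], p[0])
    return t if t > 0 else t + 2 * math.pi

def phi1_inv(t):
    """Inverse of phi1: coordinate t back to a point on the circle."""
    return (math.cos(t), math.sin(t))

# Transition map tau = phi2 o phi1_inv evaluated at a point with y < 0:
t = -2.0
tau_t = phi2(phi1_inv(t))
assert abs(tau_t - (t + 2 * math.pi)) < 1e-9   # the transition is t -> t + 2*pi
```

Since a shift by a constant is infinitely differentiable, these two charts are smoothly compatible, which is exactly the condition a smooth atlas imposes.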
Atlas (topology)
–
Two charts on a manifold
46.
Differentiable manifold
–
In mathematics, a differentiable manifold is a type of manifold that is locally similar enough to a linear space to allow one to do calculus. Any manifold can be described by a collection of charts, also known as an atlas. One may then apply ideas from calculus while working within the individual charts, since each chart lies within a linear space to which the usual rules of calculus apply. If the charts are suitably compatible, then computations done in one chart are valid in any other differentiable chart. In formal terms, a differentiable manifold is a topological manifold with a globally defined differential structure. Any topological manifold can be given a differential structure locally by using the homeomorphisms in its atlas and the standard differential structure on a linear space. In other words, where the domains of charts overlap, the coordinates defined by each chart are required to be differentiable with respect to the coordinates defined by every other chart in the atlas. The maps that relate the coordinates defined by the various charts to one another are called transition maps. Differentiability means different things in different contexts, including continuously differentiable, k times differentiable, and smooth. Furthermore, the ability to induce such a differential structure on an abstract space allows one to extend the definition of differentiability to spaces without global coordinate systems. A differential structure allows one to define the globally differentiable tangent space and differentiable functions. Differentiable manifolds are very important in physics: special kinds of differentiable manifolds form the basis for physical theories such as classical mechanics and general relativity. It is possible to develop a calculus for differentiable manifolds, and this leads to such mathematical machinery as the exterior calculus. 
The study of calculus on differentiable manifolds is known as differential geometry; the emergence of differential geometry as a distinct discipline is generally credited to Carl Friedrich Gauss and Bernhard Riemann. Riemann first described manifolds in his famous habilitation lecture before the faculty at Göttingen, and these ideas found a key application in Einstein's theory of general relativity and its underlying equivalence principle. A modern definition of a 2-dimensional manifold was given by Hermann Weyl in his 1913 book on Riemann surfaces; the widely accepted general definition of a manifold in terms of an atlas is due to Hassler Whitney. A presentation of a manifold is a second countable Hausdorff space that is locally homeomorphic to a linear space. This formalizes the notion of patching together pieces of a space to make a manifold; the manifold produced also contains the data of how it has been patched together. However, different atlases may produce the same manifold, and a manifold does not come with a preferred atlas. Thus, one defines a manifold to be a space as above with an equivalence class of atlases. There are a number of different types of manifolds, depending on the precise differentiability requirements on the transition functions. Some common examples include the following: a differentiable manifold is a topological manifold equipped with an equivalence class of atlases whose transition maps are all differentiable
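As a concrete sketch of compatible charts (a hypothetical illustration, assuming only NumPy; the chart names are invented for this example), the unit circle can be covered by two angle charts whose transition map is smooth on each piece of the overlap:

```python
import numpy as np

# A hypothetical pair of compatible charts on the unit circle S^1.
# chart_A reads off an angle in (-pi, pi); chart_B an angle in (0, 2*pi).
def chart_A(p):
    x, y = p
    return np.arctan2(y, x)

def chart_B(p):
    t = np.arctan2(p[1], p[0])
    return t if t > 0 else t + 2 * np.pi

def transition(t):
    # the transition map chart_B o chart_A^{-1} on the overlap
    p = (np.cos(t), np.sin(t))   # chart_A^{-1}
    return chart_B(p)

# On the upper half of the overlap the transition map is the identity;
# on the lower half it is t -> t + 2*pi. Both pieces are smooth, so
# calculus done in one chart carries over to the other.
print(transition(1.0))    # ~1.0
print(transition(-1.0))   # ~5.283  (= -1.0 + 2*pi)
```

Since both pieces of the transition map are smooth, the two charts determine the same differential structure on the circle.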
Differentiable manifold
–
A nondifferentiable atlas of charts for the globe. The results of calculus may not be compatible between charts if the atlas is not differentiable. In the center and right charts the Tropic of Cancer is a smooth curve, whereas in the left chart it has a sharp corner. The notion of a differentiable manifold refines that of a manifold by requiring the functions that transform between charts to be differentiable.
47.
Diffeomorphism
–
In mathematics, a diffeomorphism is an isomorphism of smooth manifolds. It is an invertible function that maps one differentiable manifold to another such that both the function and its inverse are smooth. Given two manifolds M and N, a differentiable map f : M → N is called a diffeomorphism if it is a bijection and its inverse f−1 : N → M is differentiable as well. If these functions are r times continuously differentiable, f is called a Cr-diffeomorphism. Two manifolds M and N are diffeomorphic if there is a diffeomorphism f from M to N; they are Cr diffeomorphic if there is an r times continuously differentiable bijective map between them whose inverse is also r times continuously differentiable. Given two connected open subsets U and V of Rn, a differentiable map f : U → V is said to be a diffeomorphism if it is bijective, smooth and its inverse is smooth. First remark: it is essential for V to be simply connected for the function f to be globally invertible (under the sole condition that its derivative be a bijective map at each point). Second remark: since the differential at a point, Dfx : TxU → Tf(x)V, is a linear map, it has a well-defined inverse if and only if Dfx is a bijection. The matrix representation of Dfx is the n × n matrix of first-order partial derivatives whose entry in the i-th row and j-th column is ∂fi/∂xj. This so-called Jacobian matrix is used for explicit computations. Third remark: diffeomorphisms are necessarily between manifolds of the same dimension. Imagine f going from dimension n to dimension k: if n < k then Dfx could never be surjective, and if n > k then Dfx could never be injective; in both cases, therefore, Dfx fails to be a bijection. Fourth remark: if Dfx is a bijection at x then f is said to be a local diffeomorphism. Fifth remark: given a smooth map from dimension n to dimension k, if Df is surjective, f is said to be a submersion, and if Df is injective, f is said to be an immersion. Sixth remark: a differentiable bijection is not necessarily a diffeomorphism. f(x) = x3, for example, is not a diffeomorphism from R to itself because its derivative vanishes at 0, so its inverse fails to be differentiable there. This is an example of a homeomorphism that is not a diffeomorphism. Seventh remark: when f is a map between differentiable manifolds, a diffeomorphic f is a stronger condition than a homeomorphic f. 
For a diffeomorphism, f and its inverse need to be differentiable; for a homeomorphism, f and its inverse need only be continuous. Every diffeomorphism is a homeomorphism, but not every homeomorphism is a diffeomorphism. f : M → N is called a diffeomorphism if, in coordinate charts, it satisfies the definition above. More precisely: pick any cover of M by compatible coordinate charts and do the same for N. Let φ and ψ be charts on, respectively, M and N, with U and V as, respectively, the images of φ and ψ. The map ψfφ−1 : U → V is then a diffeomorphism as in the definition above, whenever f(φ−1(U)) ⊆ ψ−1(V)
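The sixth remark can be checked numerically. A minimal sketch (assuming only NumPy; the difference-quotient helper is illustrative, not a library routine) shows that f(x) = x3 has vanishing derivative at 0, so the difference quotients of its inverse blow up there:

```python
import numpy as np

# f(x) = x**3 is a smooth bijection of R, but not a diffeomorphism:
# its derivative vanishes at 0, so the inverse y -> y**(1/3) fails
# to be differentiable there.
f = lambda x: x**3
f_inv = lambda y: np.sign(y) * np.abs(y) ** (1 / 3)

def diff_quotient(g, x, h=1e-6):
    # symmetric difference quotient approximating g'(x)
    return (g(x + h) - g(x - h)) / (2 * h)

print(diff_quotient(f, 0.0))       # ~0: Df at 0 is not invertible
print(diff_quotient(f_inv, 0.0))   # huge: inverse not differentiable at 0
```

The first quotient tends to 0 and the second diverges as h shrinks, which is exactly why this homeomorphism is not a diffeomorphism.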
Diffeomorphism
–
48.
Bijection
–
In mathematics, injections, surjections and bijections are classes of functions distinguished by the manner in which arguments and images are related or mapped to each other. A function maps elements from its domain to elements in its codomain. Given a function f : X → Y: The function is injective if every element of the codomain is mapped to by at most one element of the domain. An injective function is an injection. Notationally, ∀x, x′ ∈ X, f(x) = f(x′) ⇒ x = x′, or, equivalently, ∀x, x′ ∈ X, x ≠ x′ ⇒ f(x) ≠ f(x′). The function is surjective if every element of the codomain is mapped to by at least one element of the domain. A surjective function is a surjection. Notationally, ∀y ∈ Y, ∃x ∈ X such that y = f(x). The function is bijective if every element of the codomain is mapped to by exactly one element of the domain. A bijective function is a bijection. An injective function need not be surjective, and a surjective function need not be injective. The four possible combinations of injective and surjective features are illustrated in the diagrams to the right. A function is injective if every possible element of the codomain is mapped to by at most one argument. Equivalently, a function is injective if it maps distinct arguments to distinct images. An injective function is an injection. The formal definition is the following: the function f : X → Y is injective iff for all x, x′ ∈ X, we have f(x) = f(x′) ⇒ x = x′. A function f : X → Y is injective if and only if X is empty or f is left-invertible (there is a function g : Y → X such that g o f equals the identity function on X). Here f(X) is the image of f. Since every function is surjective when its codomain is restricted to its image, every injection can be factored as a bijection followed by an inclusion, as follows: let fR : X → f(X) be f with codomain restricted to its image. A dual factorisation is given for surjections below. 
The composition of two injections is again an injection, but if g o f is injective, then it can only be concluded that f is injective. A function is surjective if every possible image is mapped to by at least one argument. In other words, every element in the codomain has non-empty preimage. Equivalently, a function is surjective if its image is equal to its codomain. A surjective function is a surjection. The formal definition is the following: the function f : X → Y is surjective iff for all y ∈ Y, there is x ∈ X such that f(x) = y. A function f : X → Y is surjective if and only if it is right-invertible. By collapsing all arguments mapping to a given fixed image, every surjection induces a bijection defined on a quotient of its domain
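For finite domains these definitions can be checked directly by enumeration. A small sketch (the helper names are illustrative, not standard library functions):

```python
def is_injective(f, domain):
    # distinct arguments must map to distinct images
    images = [f(x) for x in domain]
    return len(images) == len(set(images))

def is_surjective(f, domain, codomain):
    # every element of the codomain must be hit at least once
    return set(f(x) for x in domain) == set(codomain)

def is_bijective(f, domain, codomain):
    # exactly one argument per image: injective and surjective
    return is_injective(f, domain) and is_surjective(f, domain, codomain)

dom = [0, 1, 2, 3]
print(is_injective(lambda x: x % 2, dom))              # False
print(is_surjective(lambda x: x % 2, dom, [0, 1]))     # True
print(is_bijective(lambda x: (x + 1) % 4, dom, dom))   # True
```

The last example, a cyclic shift of {0, 1, 2, 3}, maps each element of the codomain to exactly once, so it is a bijection.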
Bijection
–
Injective and surjective, i.e. bijective
49.
Domain of a function
–
In mathematics, and more specifically in naive set theory, the domain of definition of a function is the set of input or argument values for which the function is defined. That is, the function provides an output or value for each member of the domain. Conversely, the set of values the function takes on as output is termed the image of the function, which is sometimes also referred to as the range of the function. For instance, the domain of cosine is the set of all real numbers. If the domain of a function is a subset of the real numbers and the function is represented in a Cartesian coordinate system, then the domain is represented on the x-axis. Given a function f : X → Y, the set X is the domain of f; in the expression f(x), x is the argument and f(x) is the value. One can think of an argument as a member of the domain that is chosen as an input to the function. The image of f is the set of all values assumed by f for all possible x; this is the set f(X). The image of f can be the same set as the codomain or it can be a proper subset of it. It is, in general, smaller than the codomain; it is the whole codomain if and only if f is surjective. A well-defined function must map every element of its domain to an element of its codomain. For example, the function f defined by f(x) = 1/x has no value for f(0). Thus, the set of all real numbers, R, cannot be its domain. In cases like this, the function is either defined on R \ {0} or the gap is plugged by explicitly defining f(0). If we extend the definition of f to f(x) = 1/x for x ≠ 0 and f(0) = 0, then f is defined for all real numbers. Any function can be restricted to a subset of its domain. The restriction of g : A → B to S, where S ⊆ A, is written g|S : S → B. The natural domain of a function is the maximal set of values for which the function is defined, typically within the reals. 
For instance, the natural domain of the square root is the non-negative reals when considered as a real-number function. When considering a natural domain, the set of possible values of the function is typically called its range. There are two distinct meanings in current mathematical usage for the notion of the domain of a partial function from X to Y, i.e. a function from a subset X′ of X to Y. Most mathematicians, including recursion theorists, use the term domain of f for the set X′ of all values x such that f(x) is defined. But some, particularly category theorists, consider the domain to be X. In category theory one deals with morphisms instead of functions. Morphisms are arrows from one object to another; the domain of any morphism is the object from which an arrow starts. In this context, many set-theoretic ideas about domains must be abandoned, or at least formulated more abstractly. For example, the notion of restricting a morphism to a subset of its domain must be modified
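The plugged-gap extension of 1/x and the restriction of a function to a subset can both be sketched directly (a minimal illustration; the restrict helper is invented for this example, not a standard function):

```python
def f(x):
    # extension of x -> 1/x to all of R, plugging the gap at 0
    return 1 / x if x != 0 else 0

def restrict(g, subset):
    # the restriction g|S: defined only on members of the subset S
    def g_restricted(x):
        if x not in subset:
            raise ValueError("argument outside restricted domain")
        return g(x)
    return g_restricted

print(f(4))   # 0.25
print(f(0))   # 0   (the explicitly plugged value)

h = restrict(f, {1, 2, 4})
print(h(2))   # 0.5
# h(3) would raise ValueError: 3 is outside the restricted domain
```

Here f has all of R as its domain only because the gap at 0 was filled explicitly; h is the same rule with a smaller domain.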
Domain of a function
–
Illustration showing f, a function from pink domain X to blue and yellow codomain Y. The smaller yellow oval inside Y is the image of f. Either the image or the codomain also sometimes is called the range of f.
50.
Jacobian matrix and determinant
–
In vector calculus, the Jacobian matrix is the matrix of all first-order partial derivatives of a vector-valued function. When the matrix is a square matrix, both the matrix and its determinant are referred to as the Jacobian in literature. Suppose f : ℝn → ℝm is a function which takes as input the vector x ∈ ℝn and produces as output the vector f(x) ∈ ℝm. Then the Jacobian matrix J of f is an m×n matrix, usually defined and arranged component-wise as Jij = ∂fi/∂xj, with the i-th row holding the partial derivatives of fi with respect to x1, …, xn. This matrix, whose entries are functions of x, is also denoted by Df, Jf, and ∂(f1, …, fm)/∂(x1, …, xn). If the function is differentiable at a point x, the Jacobian matrix defines the linear map ℝn → ℝm that best approximates f near x; this linear map is thus the generalization of the usual notion of derivative. If m = n, the Jacobian matrix is a square matrix, and its determinant is the Jacobian determinant of f. It carries important information about the local behavior of f. In particular, the function f has, locally in the neighborhood of a point x, an inverse function that is differentiable if and only if the Jacobian determinant is nonzero at x. The Jacobian determinant also appears when changing the variables in multiple integrals. If m = 1, f is a scalar field and the Jacobian matrix is reduced to a row vector of partial derivatives of f, i.e. the gradient of f. These concepts are named after the mathematician Carl Gustav Jacob Jacobi. The Jacobian generalizes the gradient of a scalar-valued function of multiple variables, which itself generalizes the derivative of a scalar-valued function of a single variable. In other words, the Jacobian for a scalar-valued multivariate function is the gradient. The Jacobian can also be thought of as describing the amount of stretching, rotating or transforming that a transformation imposes locally. For example, if (x′, y′) = f(x, y) is used to transform an image, the Jacobian Jf(x, y) describes how the image in the neighborhood of (x, y) is transformed. If p is a point in ℝn and f is differentiable at p, then its derivative is given by Jf(p), in the sense that f(x) = f(p) + Jf(p)(x − p) + o(‖x − p‖); compare this to a Taylor series for a scalar function of a scalar argument, truncated to first order: f(x) = f(p) + f′(p)(x − p) + o(x − p). The Jacobian of the gradient of a scalar function of several variables has a special name: the Hessian matrix. 
If m = n, then f is a function from ℝn to itself and we can form its determinant, known as the Jacobian determinant. The Jacobian determinant is occasionally referred to simply as the Jacobian. The Jacobian determinant at a given point gives important information about the behavior of f near that point. For instance, the differentiable function f is invertible near a point p ∈ ℝn if the Jacobian determinant at p is non-zero; this is the inverse function theorem. Furthermore, if the Jacobian determinant at p is positive, then f preserves orientation near p; if it is negative, f reverses orientation
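A numeric sketch (assuming only NumPy; the jacobian helper uses central differences and is illustrative, not a library routine) for the polar-to-Cartesian map, whose Jacobian determinant is r:

```python
import numpy as np

def jacobian(f, x, h=1e-6):
    # numerical m x n Jacobian of f: R^n -> R^m via central differences
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x))
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = h
        J[:, j] = (np.asarray(f(x + e)) - np.asarray(f(x - e))) / (2 * h)
    return J

# polar -> Cartesian: (r, theta) -> (r cos(theta), r sin(theta))
polar = lambda p: np.array([p[0] * np.cos(p[1]), p[0] * np.sin(p[1])])

J = jacobian(polar, [2.0, np.pi / 3])
print(np.linalg.det(J))   # ~2.0: the determinant equals r
```

The determinant r is exactly the factor that appears as the area element r dr dθ when changing variables in double integrals.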
Jacobian matrix and determinant
–
A nonlinear map f: R 2 → R 2 sends a small square to a distorted parallelogram close to the image of the square under the best linear approximation of f near the point.
51.
Mechanics
–
Mechanics is an area of science concerned with the behaviour of physical bodies when subjected to forces or displacements, and the subsequent effects of the bodies on their environment. The scientific discipline has its origins in Ancient Greece with the writings of Aristotle. During the early modern period, scientists such as Khayyam, Galileo, Kepler, and Newton laid the foundation for what is now known as classical mechanics. It is a branch of classical physics that deals with particles that are either at rest or are moving with velocities significantly less than the speed of light. It can also be defined as a branch of science which deals with the motion of and forces on objects. Historically, classical mechanics came first, while quantum mechanics is a comparatively recent invention. Classical mechanics originated with Isaac Newton's laws of motion in Philosophiæ Naturalis Principia Mathematica. Both are commonly held to constitute the most certain knowledge that exists about physical nature. Classical mechanics has especially often been viewed as a model for other so-called exact sciences. Essential in this respect is the relentless use of mathematics in theories, as well as the decisive role played by experiment in generating and testing them. Quantum mechanics is of a wider scope, as it encompasses classical mechanics as a sub-discipline which applies under certain restricted circumstances. According to the correspondence principle, there is no contradiction or conflict between the two subjects; each simply pertains to specific situations. The correspondence principle states that the behavior of systems described by quantum theories reproduces classical physics in the limit of large quantum numbers. Quantum mechanics has superseded classical mechanics at the foundational level and is indispensable for the explanation and prediction of processes at the molecular and atomic level. However, for macroscopic processes classical mechanics is able to solve problems which are difficult in quantum mechanics and hence remains useful. 
Modern descriptions of such behavior begin with a definition of such quantities as displacement, time, velocity, acceleration, and mass. Until about 400 years ago, however, motion was explained from a very different point of view. Galileo showed that the speed of falling objects increases steadily during the time of their fall, and that this acceleration is the same for heavy objects as for light ones, provided air friction is discounted. The English mathematician and physicist Isaac Newton improved this analysis by defining force and mass and relating them to acceleration. For objects traveling at speeds close to the speed of light, Newton's laws were superseded by Albert Einstein's theory of relativity. For atomic and subatomic particles, Newton's laws were superseded by quantum theory. For everyday phenomena, however, Newton's three laws of motion remain the cornerstone of dynamics, which is the study of what causes motion. In analogy to the distinction between quantum and classical mechanics, Einstein's general and special theories of relativity have expanded the scope of Newtonian mechanics. The differences between relativistic and Newtonian mechanics become significant and even dominant as the velocity of a massive body approaches the speed of light. Relativistic corrections are also needed for quantum mechanics, although general relativity has not been integrated with it; the two theories remain incompatible, a hurdle which must be overcome in developing a theory of everything
Mechanics
–
Arabic Machine Manuscript. Unknown date (at a guess: 16th to 19th centuries).
52.
Tensor product
–
The tensor product space is thus the freest such vector space, in the sense of having the fewest constraints. The tensor product of finite-dimensional vector spaces has dimension equal to the product of the dimensions of the two factors: dim(V ⊗ W) = dim V · dim W. In particular, this distinguishes the tensor product from the direct sum vector space, whose dimension is the sum of the two dimensions. In each such case the tensor product is characterized by a similar universal property: it is the freest bilinear operation. The general concept of a tensor product is captured by monoidal categories. The ⊠ variant of ⊗ is used in control theory. The tensor product of two vector spaces V and W over a field K is another vector space over K. It is denoted V ⊗K W, or V ⊗ W when the underlying field K is understood, and this product operation ⊗ : V × W → V ⊗ W is quickly verified to be bilinear. As an example, letting V = W = R3 and considering the standard basis set for each, the tensor product V ⊗ W is spanned by the nine basis vectors x̂ ⊗ x̂, x̂ ⊗ ŷ, …, ẑ ⊗ ẑ. For vectors v = x̂ + 2ŷ + 3ẑ and w = x̂ in R3, the tensor product v ⊗ w = x̂ ⊗ x̂ + 2 ŷ ⊗ x̂ + 3 ẑ ⊗ x̂. The above definition relies on a choice of basis, which cannot be done canonically for a general vector space; however, any two choices of basis lead to isomorphic tensor product spaces. Alternatively, the tensor product may be defined in an expressly basis-independent manner as a quotient space of a free vector space over V × W. The free vector space F(S) over a set S is a vector space over K with the usual addition; it has a basis parameterized by S. Because of this explicit expression, an element of F(S) is often called a formal sum of symbols in S. By construction, the dimension of the vector space F(S) equals the cardinality of the set S. Let us first consider a special case: let us say V, W are the free vector spaces F(S), F(T) for the sets S, T respectively. In this special case, the tensor product is defined as F(S) ⊗ F(T) = F(S × T). In most typical cases, any vector space can be immediately understood as the free vector space for some set. However, there is also a way of constructing the tensor product directly from V and W. 
In other words, the operations are well-defined. In this way, because it is a quotient of the free vector space by the subspace generated by the bilinearity relations, it is the freest such vector space. For this reason, the tensor product V ⊗ W can also be characterised by a universal property. The following expression explicitly gives the subspace N: N = span{(v1 + v2, w) − (v1, w) − (v2, w), (v, w1 + w2) − (v, w1) − (v, w2), (cv, w) − c(v, w), (v, cw) − c(v, w) : v, v1, v2 ∈ V; w, w1, w2 ∈ W; c ∈ K}
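In a chosen basis, the tensor product of two vectors can be computed as an outer product, a minimal sketch assuming only NumPy:

```python
import numpy as np

# In a chosen basis, (v ⊗ w)_ij = v_i * w_j: the outer product.
# For V = W = R^3, the product space has dim 3 * 3 = 9.
v = np.array([1.0, 2.0, 3.0])   # v = x-hat + 2 y-hat + 3 z-hat
w = np.array([1.0, 0.0, 0.0])   # w = x-hat

T = np.outer(v, w)
print(T.shape)   # (3, 3): nine components, one per basis vector e_i ⊗ e_j
print(T[:, 0])   # [1. 2. 3.]: the x-hat column carries v's coefficients

# bilinearity: (2v) ⊗ w equals 2 (v ⊗ w)
print(np.allclose(np.outer(2 * v, w), 2 * T))   # True
```

The nine entries of T are exactly the coefficients of v ⊗ w with respect to the nine basis vectors e_i ⊗ e_j of R3 ⊗ R3.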
Tensor product
–
This commutative diagram presents the universal property of the tensor product.
53.
Orthogonal
–
The concept of orthogonality has been broadly generalized in mathematics, as well as in areas such as chemistry and engineering. The word comes from the Greek ὀρθός (orthos), meaning upright, and γωνία (gonia), meaning angle. The ancient Greek ὀρθογώνιον (orthogōnion) and classical Latin orthogonium originally denoted a rectangle; later, they came to mean a right triangle. In the 12th century, the post-classical Latin word orthogonalis came to mean a right angle or something related to a right angle. In geometry, two Euclidean vectors are orthogonal if they are perpendicular, i.e. they form a right angle. Two vectors, x and y, in an inner product space, V, are orthogonal if their inner product ⟨x, y⟩ is zero. This relationship is denoted x ⊥ y. Two vector subspaces, A and B, of an inner product space, V, are called orthogonal subspaces if each vector in A is orthogonal to each vector in B. The largest subspace of V that is orthogonal to a given subspace is its orthogonal complement. Given a module M and its dual M∗, an element m′ of M∗ and an element m of M are orthogonal if m′(m) = 0. Two sets S′ ⊆ M∗ and S ⊆ M are orthogonal if each element of S′ is orthogonal to each element of S. A term rewriting system is said to be orthogonal if it is left-linear and non-ambiguous; orthogonal term rewriting systems are confluent. A set of vectors in an inner product space is called pairwise orthogonal if each pairing of them is orthogonal; such a set is called an orthogonal set. Nonzero pairwise orthogonal vectors are always linearly independent. In certain cases, the word normal is used to mean orthogonal. For example, the y-axis is normal to the curve y = x2 at the origin. However, normal may also refer to the magnitude of a vector. In particular, a set is called orthonormal if it is an orthogonal set of unit vectors. As a result, use of the term normal to mean orthogonal is often avoided. The word normal also has a different meaning in probability and statistics. A vector space with a bilinear form generalizes the case of an inner product. 
When the bilinear form applied to two vectors results in zero, then they are orthogonal. The case of a pseudo-Euclidean plane uses the term hyperbolic orthogonality: in the diagram, axes x′ and t′ are hyperbolic-orthogonal for any given ϕ. In 2-D or higher-dimensional Euclidean space, two vectors are orthogonal if and only if their dot product is zero, i.e. they make an angle of 90°; hence orthogonality of vectors is an extension of the concept of perpendicular vectors into higher-dimensional spaces
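The dot-product criterion and the independence of pairwise orthogonal vectors can be checked directly, a minimal sketch assuming only NumPy (the orthogonal helper is invented for this example):

```python
import numpy as np

def orthogonal(x, y, tol=1e-12):
    # two Euclidean vectors are orthogonal iff their dot product is zero
    return abs(np.dot(x, y)) < tol

print(orthogonal([1, 0, 0], [0, 5, 0]))   # True: right angle
print(orthogonal([1, 1, 0], [1, 0, 0]))   # False: dot product is 1

# nonzero pairwise orthogonal vectors are linearly independent:
basis = np.array([[1.0, 1.0], [1.0, -1.0]])
print(np.linalg.matrix_rank(basis))       # 2: full rank
```

The two rows of basis are orthogonal but not unit length, so the set is orthogonal without being orthonormal.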
Orthogonal
–
The line segments AB and CD are orthogonal to each other.
54.
Levi-Civita symbol
–
It is named after the Italian mathematician and physicist Tullio Levi-Civita. Other names include the permutation symbol, antisymmetric symbol, or alternating symbol. The standard letters to denote the Levi-Civita symbol are the Greek lower-case epsilon ε or ϵ, or, less commonly, the Latin lower-case e. Index notation allows one to display permutations in a way compatible with tensor analysis: εi1i2…in, where each index i1, i2, …, in takes values 1, 2, …, n. There are n^n indexed values of εi1i2…in, which can be arranged into an n-dimensional array. The key defining property of the symbol is total antisymmetry in all the indices: when any two indices are interchanged, equal or not, the symbol is negated, ε…ip…iq… = −ε…iq…ip…. If any two indices are equal, the symbol is zero. The value ε12…n must be defined, else the particular values of the symbol for all permutations are indeterminate. Most authors choose ε12…n = +1, which means the Levi-Civita symbol equals the sign of a permutation when the indices are all unequal; this choice is used throughout this article. The values of the Levi-Civita symbol are independent of any metric tensor. Also, the specific term symbol emphasizes that it is not a tensor because of how it transforms between coordinate systems; however, it can be interpreted as a tensor density. The Levi-Civita symbol allows the determinant of a square matrix, and the cross product of two vectors in three-dimensional Euclidean space, to be expressed in index notation. The three- and higher-dimensional Levi-Civita symbols are used more commonly. In three dimensions only, the cyclic permutations of (1, 2, 3) are all even permutations; similarly, the anticyclic permutations are all odd permutations. This means that in 3D it is sufficient to take cyclic or anticyclic permutations of (1, 2, 3) to easily obtain all the even or odd permutations. Analogous to 2-dimensional matrices, the values of the 3-dimensional Levi-Civita symbol can be arranged into a 3 × 3 × 3 array. The product formula εi1i2…in = ∏ over 1 ≤ p < q ≤ n of sgn(iq − ip) is valid for all index values, and for any n. 
However, computing the formula above naively is O(n²) in time complexity. A tensor whose components in an orthonormal basis are given by the Levi-Civita symbol is sometimes called a permutation tensor. It is actually a pseudotensor because under an orthogonal transformation of Jacobian determinant −1 (for example, a reflection in an odd number of dimensions) it would acquire a minus sign if it were a tensor. As the Levi-Civita symbol is a pseudotensor, the result of taking a cross product is a pseudovector. Under a general coordinate change, the components of the permutation tensor are multiplied by the Jacobian of the transformation matrix. This implies that in coordinate frames different from the one in which the tensor was defined, its components can differ from those of the Levi-Civita symbol by an overall factor; if the frame is orthonormal, the factor will be ±1 depending on whether the orientation of the frame is the same or not. In index-free tensor notation, the Levi-Civita symbol is replaced by the concept of the Hodge dual. In an orthonormal basis the symbol's components are the same whether indices are written raised or lowered; thus, one could write ε_ij…k = ε^ij…k
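The n-dimensional array of values and the determinant formula can be checked directly, a minimal sketch assuming only NumPy and the standard library (the levi_civita helper is invented for this example):

```python
import numpy as np
from itertools import permutations

def levi_civita(n):
    # n-dimensional Levi-Civita symbol as an n-dimensional array
    # (n^n entries, of which only the n! with distinct indices are nonzero)
    eps = np.zeros((n,) * n)
    for perm in permutations(range(n)):
        # sign of the permutation via its inversion count
        inversions = sum(perm[i] > perm[j]
                         for i in range(n) for j in range(i + 1, n))
        eps[perm] = (-1) ** inversions
    return eps

eps3 = levi_civita(3)
print(eps3[0, 1, 2], eps3[2, 1, 0], eps3[0, 0, 1])   # 1.0 -1.0 0.0

# determinant of a 3x3 matrix A: det A = eps_ijk A_1i A_2j A_3k
A = np.array([[2.0, 0, 0], [1, 3, 0], [0, 0, 4]])
det = np.einsum('ijk,i,j,k->', eps3, A[0], A[1], A[2])
print(det)   # 24.0, matching np.linalg.det(A)
```

The einsum contraction sums ε_ijk A_1i A_2j A_3k over all indices, which is exactly the index-notation expression for the determinant.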
Levi-Civita symbol
–
For the indices (i, j, k) in ε ijk, the values 1, 2, 3 occurring in the cyclic order (1,2,3) (yellow) correspond to ε = +1, while occurring in the reverse cyclic order (red) correspond to ε = −1, otherwise ε = 0.
56.
Surface integral
–
In mathematics, a surface integral is a generalization of multiple integrals to integration over surfaces. It can be thought of as the double integral analog of the line integral. Given a surface, one may integrate over its scalar fields and vector fields. Surface integrals have applications in physics, particularly with the theories of classical electromagnetism. Let such a parameterization be x(s, t), where (s, t) varies in some region T in the plane. The surface integral of a scalar field f is then ∬_S f dΣ = ∬_T f(x(s, t)) ‖∂x/∂s × ∂x/∂t‖ ds dt, which can also be expressed in the equivalent form ∬_S f dΣ = ∬_T f(x(s, t)) √g ds dt, where g is the determinant of the first fundamental form of the mapping x. For a surface given as a graph z = f(x, y), with r = (x, y, f(x, y)), one has ∂r/∂x = (1, 0, fx) and ∂r/∂y = (0, 1, fy), and one can recognize the vector ∂r/∂x × ∂r/∂y = (−fx, −fy, 1) as the normal vector to the surface. Note that because of the presence of the cross product, the above formulas only work for surfaces embedded in three-dimensional space. This can be seen as integrating a Riemannian volume form on the parameterized surface. Consider a vector field v on S, that is, for each x in S, v(x) is a vector. The surface integral of each scalar component can be defined according to the definition of the surface integral of a scalar field. This applies for example in the expression of the electric field at some fixed point due to an electrically charged surface. Alternatively, we can integrate the normal component of the vector field. Imagine that we have a fluid flowing through S, such that v(x) determines the velocity of the fluid at x. The flux is defined as the quantity of fluid flowing through S per unit time. This illustration implies that if the vector field is tangent to S at each point, then the flux is zero, because the fluid just flows in parallel to S. This also implies that if v does not just flow along S, that is, if v has both a tangential and a normal component, then only the normal component contributes to the flux, and we find the formula ∬_S v · dΣ = ∬_S (v · n) dΣ = ∬_T (v(x(s, t)) · n(s, t)) ‖∂x/∂s × ∂x/∂t‖ ds dt = ∬_T v(x(s, t)) · (∂x/∂s × ∂x/∂t) ds dt. The cross product on the right-hand side of this expression is a (not necessarily unit) surface normal determined by the parametrization. 
This formula defines the integral on the left. We may also interpret this as a special case of integrating 2-forms, where we identify the vector field with a 1-form, and then integrate its Hodge dual over the surface. The treatment of differential 2-forms is similar: the integral of a 2-form f on S is given by an integral ∬_D (…) ds dt over the parameter domain D, where ∂x/∂s × ∂x/∂t is the surface element normal to S. Let us note that the surface integral of this 2-form is the same as the surface integral of the vector field which has as components fx, fy and fz
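The flux formula can be checked numerically. A minimal sketch (assuming only NumPy) computes the flux of the radial field v(x) = x through the unit sphere, where the parametrization makes ∂x/∂s × ∂x/∂t = sin(s) x, so the exact answer is the sphere's area, 4π:

```python
import numpy as np

# Flux of v(x) = x through the unit sphere, parametrized by
# x(s, t) = (sin s cos t, sin s sin t, cos s), s in [0, pi], t in [0, 2 pi].
ns, nt = 200, 400
s = (np.arange(ns) + 0.5) * (np.pi / ns)        # midpoint rule in s
t = (np.arange(nt) + 0.5) * (2 * np.pi / nt)    # midpoint rule in t
S, T = np.meshgrid(s, t, indexing='ij')

x = np.stack([np.sin(S) * np.cos(T), np.sin(S) * np.sin(T), np.cos(S)])
# For this parametrization, dx/ds x dx/dt = sin(s) * x, so
# v . (dx/ds x dx/dt) = sin(s) * |x|^2 = sin(s).
integrand = np.sin(S) * np.sum(x * x, axis=0)
flux = integrand.sum() * (np.pi / ns) * (2 * np.pi / nt)
print(flux)   # ~12.566, i.e. 4*pi
```

Since v is everywhere parallel to the outward normal of the sphere, the flux reduces to the surface area, and the midpoint sum reproduces 4π to high accuracy.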
Surface integral
–
The definition of surface integral relies on splitting the surface into small surface elements.
57.
Volume integral
–
In mathematics (in particular, in multivariable calculus), a volume integral refers to an integral over a 3-dimensional domain; that is, it is a special case of multiple integrals. Volume integrals are especially important in physics for many applications, for example, to calculate flux densities. It can also mean a triple integral within a region D in R3 of a function f(x, y, z), and is usually written as ∭_D f(x, y, z) dx dy dz. A volume integral in cylindrical coordinates is ∭_D f(ρ, φ, z) ρ dρ dφ dz. Integrating the constant function 1 over a region simply recovers its volume; this is rather trivial, however, and a volume integral is far more powerful: for instance, integrating a variable density over a body yields its total mass
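A minimal numeric sketch (assuming only NumPy) of a triple integral in cylindrical coordinates: integrating 1 with the Jacobian factor ρ over a cylinder recovers its volume πR²H:

```python
import numpy as np

# Volume of a cylinder (radius R, height H) as a triple integral in
# cylindrical coordinates: V = triple integral of rho d_rho d_phi d_z.
R, H, n = 1.0, 2.0, 100
rho = (np.arange(n) + 0.5) * (R / n)          # midpoint samples
phi = (np.arange(n) + 0.5) * (2 * np.pi / n)
z = (np.arange(n) + 0.5) * (H / n)
drho, dphi, dz = R / n, 2 * np.pi / n, H / n

P, _, _ = np.meshgrid(rho, phi, z, indexing='ij')
V = np.sum(P) * drho * dphi * dz              # integrand 1, Jacobian rho
print(V)   # ~6.2832, i.e. pi * R**2 * H
```

The factor ρ in the sum is the volume element's Jacobian; without it the sum would not converge to the cylinder's volume.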
Volume integral
–
Volume element in spherical coordinates
58.
Integration (mathematics)
–
In mathematics, an integral assigns numbers to functions in a way that can describe displacement, area, volume, and other concepts that arise by combining infinitesimal data. Integration is one of the two main operations of calculus, with its inverse, differentiation, being the other. Given a function f of a real variable x and an interval [a, b] of the real line, the definite integral of f from a to b is defined informally as the signed area of the region bounded by the graph of f, the x-axis and the vertical lines x = a and x = b. The area above the x-axis adds to the total and that below the x-axis subtracts from the total. Roughly speaking, the operation of integration is the reverse of differentiation. For this reason, the term integral may also refer to the related notion of the antiderivative, a function F whose derivative is the given function f. In this case, it is called an indefinite integral. The integrals discussed in this article are those termed definite integrals. A rigorous mathematical definition of the integral was given by Bernhard Riemann. It is based on a limiting procedure which approximates the area of a curvilinear region by breaking the region into thin vertical slabs. A line integral is defined for functions of two or three variables, and the interval of integration is replaced by a curve connecting two points on the plane or in space. In a surface integral, the curve is replaced by a piece of a surface in three-dimensional space. A systematic technique capable of determining integrals is the method of exhaustion of the ancient Greek astronomer Eudoxus, which sought to find areas and volumes by breaking them up into an infinite number of divisions for which the area or volume was known. This method was further developed and employed by Archimedes in the 3rd century BC and used to calculate areas for parabolas and an approximation to the area of a circle. A similar method was developed in China around the 3rd century AD by Liu Hui, and it was used in the 5th century by the Chinese father-and-son mathematicians Zu Chongzhi and Zu Geng. The next significant advances in integral calculus did not begin to appear until the 17th century. Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation. Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers. 
The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Newton and Leibniz. The theorem demonstrates a connection between integration and differentiation; this connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Newton and Leibniz developed
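Riemann's "thin vertical slabs" construction described above can be sketched numerically. The following is a minimal illustration; the function x² and the interval [0, 1] are arbitrary choices, not taken from the article:

```python
# Riemann-sum approximation of a definite integral, illustrating the
# "thin vertical slabs" construction: the region under the graph is cut
# into n slabs of equal width and their areas are summed.

def riemann_sum(f, a, b, n):
    """Approximate the integral of f over [a, b] with n left-endpoint slabs."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

# The integral of x^2 over [0, 1] is exactly 1/3; the approximation
# converges to it as the slabs get thinner.
approx = riemann_sum(lambda x: x * x, 0.0, 1.0, 100000)
print(approx)  # close to 0.3333...
```

Refining n tightens the approximation, which is exactly the limiting procedure in Riemann's definition.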
Integration (mathematics)
–
A definite integral of a function can be represented as the signed area of the region bounded by its graph.
59.
Albert Einstein
–
Albert Einstein was a German-born theoretical physicist. He developed the theory of relativity, one of the two pillars of modern physics; Einstein's work is also known for its influence on the philosophy of science. Einstein is best known in popular culture for his mass–energy equivalence formula E = mc². Near the beginning of his career, Einstein thought that Newtonian mechanics was no longer enough to reconcile the laws of classical mechanics with the laws of the electromagnetic field. This led him to develop his special theory of relativity during his time at the Swiss Patent Office in Bern. Shortly before, he had acquired Swiss citizenship in 1901, which he kept for his whole life. He continued to deal with problems of statistical mechanics and quantum theory, which led to his explanations of particle theory and the motion of molecules. He also investigated the thermal properties of light, which laid the foundation of the photon theory of light. In 1917, Einstein applied the general theory of relativity to model the large-scale structure of the universe. He was visiting the United States when Adolf Hitler came to power in 1933 and, being Jewish, did not go back to Germany; he settled in the United States, becoming an American citizen in 1940. On the eve of World War II, he endorsed a letter to President Franklin D. Roosevelt alerting him to the potential development of extremely powerful bombs of a new type; this eventually led to what would become the Manhattan Project. Einstein supported defending the Allied forces, but generally denounced the idea of using the newly discovered nuclear fission as a weapon. Later, with the British philosopher Bertrand Russell, Einstein signed the Russell–Einstein Manifesto. Einstein was affiliated with the Institute for Advanced Study in Princeton, New Jersey, until his death in 1955. Einstein published more than 300 scientific papers along with over 150 non-scientific works. On 5 December 2014, universities and archives announced the release of Einstein's papers, comprising more than 30,000 unique documents.
Einstein's intellectual achievements and originality have made the word Einstein synonymous with genius. Albert Einstein was born in Ulm, in the Kingdom of Württemberg in the German Empire, on 14 March 1879. His parents were Hermann Einstein, a salesman and engineer, and Pauline Koch. The Einsteins were non-observant Ashkenazi Jews, and Albert attended a Catholic elementary school in Munich from the age of 5 for three years. At the age of 8, he was transferred to the Luitpold Gymnasium. In 1894, his father's company lost a major contract, and the loss forced the sale of the Munich factory. In search of business, the Einstein family moved to Italy, first to Milan and then to Pavia; when the family moved to Pavia, Einstein stayed in Munich to finish his studies at the Luitpold Gymnasium. His father intended for him to pursue electrical engineering, but Einstein clashed with the authorities and resented the school's regimen. He later wrote that the spirit of learning and creative thought was lost in strict rote learning. At the end of December 1894, he travelled to Italy to join his family in Pavia, convincing the school to let him go by using a doctor's note. During his time in Italy he wrote a short essay with the title On the Investigation of the State of the Ether in a Magnetic Field
Albert Einstein
–
Albert Einstein in 1921
Albert Einstein
–
Einstein at the age of 3 in 1882
Albert Einstein
–
Albert Einstein in 1893 (age 14)
Albert Einstein
–
Einstein's matriculation certificate at the age of 17, showing his final grades from the Argovian cantonal school (Aargauische Kantonsschule, on a scale of 1–6, with 6 being the highest possible mark)
60.
Manifold
–
In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point. More precisely, each point of an n-dimensional manifold has a neighbourhood that is homeomorphic to the Euclidean space of dimension n. One-dimensional manifolds include lines and circles, but not figure eights; two-dimensional manifolds are also called surfaces. Although a manifold locally resembles Euclidean space, globally it may not. For example, the surface of the sphere is not a Euclidean space, but in a region it can be charted by means of map projections of the region into the Euclidean plane. When a region appears in two neighbouring charts, the two representations do not coincide exactly and a transformation is needed to pass from one to the other. Manifolds naturally arise as solution sets of systems of equations and as graphs of functions. One important class of manifolds is the class of differentiable manifolds; this differentiable structure allows calculus to be done on manifolds. A Riemannian metric on a manifold allows distances and angles to be measured. Symplectic manifolds serve as the phase spaces in the Hamiltonian formalism of classical mechanics, while four-dimensional Lorentzian manifolds model spacetime in general relativity. After a line, the circle is the simplest example of a topological manifold. Topology ignores bending, so a small piece of a circle is treated exactly the same as a small piece of a line. Consider, for instance, the top part of the circle x² + y² = 1, where the y-coordinate is positive. Any point of this arc can be described by its x-coordinate, so projection onto the first coordinate is a continuous, and invertible, mapping from the arc to the open interval (−1, 1). Such functions, along with the regions they map, are called charts. Similarly, there are charts for the bottom, left, and right parts of the circle; together, these parts cover the whole circle and the four charts form an atlas for the circle.
The top and right charts, χtop and χright respectively, overlap in their domain: their intersection lies in the quarter of the circle where both coordinates are positive, and each maps this part into the interval (0, 1), though differently. Let a be any number in (0, 1); then T(a) = χright(χtop⁻¹(a)) = χright(a, √(1 − a²)) = √(1 − a²). Such a function is called a transition map. The top, bottom, left, and right charts show that the circle is a manifold, but charts need not be geometric projections, and the number of charts is a matter of some choice. Two slope-based charts provide a second atlas for the circle, related by the transition map t = 1/s. Each chart omits a single point, either (−1, 0) for s or (1, 0) for t, and it can be proved that it is not possible to cover the full circle with a single chart. Viewed using calculus, the transition function T is simply a function between open intervals, which gives a meaning to the statement that T is differentiable
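The charts and transition map above can be checked numerically. A minimal sketch, assuming the projections described in the text (the function names `chi_top_inverse`, `chi_right` are illustrative):

```python
import math

# Charts on the unit circle: chi_top projects a point of the upper arc to
# its x-coordinate, chi_right projects a point of the right arc to its
# y-coordinate. The transition map composes one chart with the inverse of
# the other on their overlap (0, 1).

def chi_top_inverse(a):
    """Inverse of the top chart: recover the upper-arc point with x-coordinate a."""
    return (a, math.sqrt(1.0 - a * a))

def chi_right(point):
    """Right chart: project a point of the right arc to its y-coordinate."""
    x, y = point
    return y

def transition(a):
    """Transition map T = chi_right o chi_top^-1 on the overlap (0, 1)."""
    return chi_right(chi_top_inverse(a))

# On the overlap, T(a) = sqrt(1 - a^2), and T is its own inverse.
a = 0.6
print(transition(a))              # approximately 0.8
print(transition(transition(a)))  # back to approximately 0.6
```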
Manifold
–
The surface of the Earth requires (at least) two charts to include every point. Here the
globe is decomposed into charts around the
North and
South Poles.
Manifold
–
The
real projective plane is a two-dimensional manifold that cannot be realized in three dimensions without self-intersection, shown here as
Boy's surface.
61.
Solid mechanics
–
Solid mechanics is fundamental for civil, aerospace, nuclear, and mechanical engineering, for geology, and for many branches of physics such as materials science. It has specific applications in other areas, such as understanding the anatomy of living beings. One of the most common practical applications of solid mechanics is the Euler–Bernoulli beam equation. Solid mechanics extensively uses tensors to describe stresses, strains, and the relationship between them. As shown in the following table, solid mechanics inhabits a central place within continuum mechanics; the field of rheology presents an overlap between solid and fluid mechanics. A material has a rest shape, and its shape departs from the rest shape due to stress. The amount of departure from the rest shape is called deformation, and the proportion of deformation to original size is called strain. If the applied stress is sufficiently low, almost all solid materials deform in direct proportion to the stress; this region of deformation is known as the linearly elastic region. It is most common for analysts in solid mechanics to use linear material models; however, real materials often exhibit non-linear behavior, and interest in non-linear models grows as new materials are used and old ones are pushed to their limits. There are four basic models that describe how a solid responds to an applied stress. Elastically – When an applied stress is removed, the material returns to its undeformed state. Linearly elastic materials, those that deform proportionally to the applied load, can be described by linear elasticity equations such as Hooke's law. Viscoelastically – These are materials that behave elastically but also have damping; this implies that the material response has time-dependence. Plastically – Materials that behave elastically generally do so when the applied stress is less than a yield value. When the stress is greater than the yield stress, the material behaves plastically; that is, deformation that occurs after yield is permanent. Thermoelastically – There is coupling of mechanical with thermal responses; in general, thermoelasticity is concerned with elastic solids under conditions that are neither isothermal nor adiabatic. The simplest theory involves Fourier's law of heat conduction, as opposed to advanced theories with physically more realistic models.
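The distinction between the linearly elastic and plastic regimes can be sketched in one dimension. The material constants below are illustrative assumptions (roughly steel-like), not values from the article:

```python
# A minimal 1-D sketch of two of the material models listed above:
# below the yield stress the response is linearly elastic (Hooke's law,
# stress = E * strain); beyond it, deformation is permanent (plastic).

E_MODULUS = 200e9     # Young's modulus in Pa (assumed, typical of steel)
YIELD_STRESS = 250e6  # yield stress in Pa (assumed)

def stress_linear_elastic(strain):
    """Hooke's law: stress proportional to strain."""
    return E_MODULUS * strain

def is_plastic(strain):
    """True once the elastic stress would exceed the yield stress."""
    return stress_linear_elastic(strain) > YIELD_STRESS

print(stress_linear_elastic(1e-4))  # 2.0e7 Pa, well inside the elastic region
print(is_plastic(2e-3))             # True: 4.0e8 Pa exceeds the yield stress
```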
Among later milestones: Castigliano's theorem, which includes the method of least work as a special case; Timoshenko's 1922 correction of the Euler–Bernoulli beam equation; and Hardy Cross's 1936 publication of the moment distribution method, an important innovation in the design of continuous frames.
Solid mechanics
–
Continuum mechanics
62.
Metamaterials
–
A metamaterial is a material engineered to have a property that is not found in nature. They are made from assemblies of multiple elements fashioned from materials such as metals or plastics. The materials are arranged in repeating patterns, at scales that are smaller than the wavelengths of the phenomena they influence. Metamaterials derive their properties not from the properties of the base materials, but from their newly designed structures. Appropriately designed metamaterials can affect waves of electromagnetic radiation or sound in a manner not observed in bulk materials. Those that exhibit a negative index of refraction for particular wavelengths have attracted significant research; these materials are known as negative-index metamaterials. Metamaterials offer the potential to create superlenses; such a lens could allow imaging below the diffraction limit, the minimum resolution that can be achieved by conventional glass lenses. A form of invisibility was demonstrated using gradient-index materials. Acoustic and seismic metamaterials are also research areas. Explorations of artificial materials for manipulating electromagnetic waves began at the end of the 19th century. Some of the earliest structures that may be considered metamaterials were studied by Jagadish Chandra Bose, who in 1898 researched substances with chiral properties. Karl Ferdinand Lindman studied wave interaction with metallic helices as artificial chiral media in the early twentieth century. Winston E. Kock developed materials that had characteristics similar to metamaterials in the late 1940s. In the 1950s and 1960s, artificial dielectrics were studied for lightweight microwave antennas; microwave radar absorbers were researched in the 1980s and 1990s as applications for artificial chiral media. Negative-index materials were first described theoretically by Victor Veselago in 1967; he proved that such materials could transmit light.
He showed that the phase velocity could be made anti-parallel to the direction of the Poynting vector; this is contrary to wave propagation in naturally occurring materials. John Pendry was the first to identify a practical way to make a left-handed metamaterial; such a material allows an electromagnetic wave to convey energy against its phase velocity. Pendry's idea was that metallic wires aligned along the direction of a wave could provide negative permittivity. Some natural materials display negative permittivity; the challenge was achieving negative permeability. In 1999 Pendry demonstrated that a split ring with its axis placed along the direction of wave propagation could do so. In the same paper, he showed that a periodic array of wires and rings could give rise to a negative refractive index. Pendry also proposed a related negative-permeability design, the Swiss roll. In 2000, Smith et al. reported the experimental demonstration of functioning electromagnetic metamaterials by horizontally stacking, periodically, split-ring resonators and thin wire structures
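The effect of a negative index of refraction can be illustrated with Snell's law, n₁ sin θ₁ = n₂ sin θ₂; with n₂ < 0 the refracted ray emerges on the opposite side of the normal compared with an ordinary medium. A small sketch (the index values 1.5 and −1.5 are illustrative assumptions):

```python
import math

# Snell's law, n1 * sin(theta1) = n2 * sin(theta2), applied with a negative
# refractive index to show the reversed bending that characterizes
# negative-index metamaterials.

def refraction_angle(n1, theta1, n2):
    """Angle of refraction in radians, from Snell's law."""
    return math.asin(n1 * math.sin(theta1) / n2)

theta1 = math.radians(30.0)
print(math.degrees(refraction_angle(1.0, theta1, 1.5)))   # ~19.47 deg, ordinary glass
print(math.degrees(refraction_angle(1.0, theta1, -1.5)))  # ~-19.47 deg, negative-index medium
```

The sign flip of the refraction angle is the geometric signature of a negative-index medium.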
Metamaterials
–
Negative index metamaterial array configuration, which was constructed of copper
split-ring resonators and wires mounted on interlocking sheets of fiberglass circuit board. The total array consists of 3 by 20×20 unit cells with overall dimensions of 10×100×100 mm.
63.
Chain rule
–
In calculus, the chain rule is a formula for computing the derivative of the composition of two or more functions. This can be written more explicitly in terms of the variable: let F = f ∘ g, or equivalently, F(x) = f(g(x)) for all x; then one can write F′(x) = f′(g(x)) g′(x). The chain rule may be written, in Leibniz's notation, in the following way. If a variable z depends on the variable y, which itself depends on the variable x, so that y and z are dependent variables, then z depends on x as well, via the intermediate variable y. The chain rule then states: dz/dx = dz/dy ⋅ dy/dx. In integration, the counterpart to the chain rule is the substitution rule. The chain rule seems to have first been used by Leibniz, who used it to calculate the derivative of √(a + bz + cz²) as the composite of the square root function and the function a + bz + cz². He first mentioned it in a 1676 memoir, and the common notation of the chain rule is due to Leibniz. L'Hôpital uses the chain rule implicitly in his Analyse des infiniment petits. The chain rule does not appear in any of Leonhard Euler's analysis books, even though they were written over a hundred years after Leibniz's discovery. Suppose that a skydiver jumps from an aircraft, and assume that t seconds after his jump, his height above sea level in meters is given by g(t) = 4000 − 4.9t². One model for the atmospheric pressure at a height h is f(h) = 101325 e^(−0.0001h). These two equations can be differentiated and combined in various ways to produce the following data: g′(t) = −9.8t is the velocity of the skydiver at time t; f′(h) = −10.1325 e^(−0.0001h) is the rate of change in pressure with respect to height at the height h and is proportional to the buoyant force on the skydiver at h meters above sea level; (f ∘ g)(t) is the atmospheric pressure the skydiver experiences t seconds after his jump; and (f ∘ g)′(t) is the rate of change in atmospheric pressure with respect to time at t seconds after the skydiver's jump and is proportional to the buoyant force on the skydiver at t seconds after his jump.
The chain rule gives a method for computing (f ∘ g)′ in terms of f′ and g′. While it is always possible to directly apply the definition of the derivative to compute the derivative of a composite function, this is usually very difficult. The utility of the chain rule is that it turns a complicated derivative into several easy derivatives. The chain rule states that, under appropriate conditions, (f ∘ g)′(t) = f′(g(t)) ⋅ g′(t)
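The skydiver example above can be checked numerically: the chain-rule product f′(g(t)) ⋅ g′(t) should match a finite-difference estimate of the derivative of f(g(t)). A minimal sketch using the functions given in the text:

```python
import math

# Numerical check of the chain rule with the skydiver example:
# g(t) = 4000 - 4.9 t^2 (height after t seconds) and
# f(h) = 101325 * exp(-0.0001 h) (pressure at height h).

def g(t):
    return 4000.0 - 4.9 * t * t

def f(h):
    return 101325.0 * math.exp(-0.0001 * h)

def composite_rate(t):
    """(f o g)'(t) by the chain rule: f'(g(t)) * g'(t)."""
    f_prime = -10.1325 * math.exp(-0.0001 * g(t))  # f'(h) evaluated at h = g(t)
    g_prime = -9.8 * t                             # g'(t)
    return f_prime * g_prime

# Compare against a centered finite difference of f(g(t)).
t, eps = 10.0, 1e-6
numeric = (f(g(t + eps)) - f(g(t - eps))) / (2 * eps)
print(composite_rate(t), numeric)  # the two values agree closely
```

The result is positive: as the skydiver falls, the pressure he experiences increases with time.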
Chain rule
64.
Tensor derivative (continuum mechanics)
–
The derivatives of scalars, vectors, and second-order tensors with respect to second-order tensors are of considerable use in continuum mechanics. These derivatives are used in the theories of nonlinear elasticity and plasticity; the directional derivative provides a systematic way of finding them. The definitions of directional derivatives for various situations are given below, and it is assumed that the functions are sufficiently smooth that derivatives can be taken. Let f be a real-valued function of the vector v. Then the derivative of f with respect to v is the vector defined through its dot product with any vector u: ∂f/∂v · u = Df(v)[u] = [d/dα f(v + αu)] at α = 0, for all vectors u. Let f be a vector-valued function of the vector v. Then the derivative of f with respect to v is the second-order tensor defined through (∂f/∂v) · u = Df(v)[u] = [d/dα f(v + αu)] at α = 0, for all vectors u. Let f be a real-valued function of the second-order tensor S. Then the derivative of f with respect to S in the direction T is the second-order tensor defined as ∂f/∂S : T = Df(S)[T] = [d/dα f(S + αT)] at α = 0, for all second-order tensors T. If T is a tensor field of order n > 1, then the divergence of the field is a tensor of order n − 1. Note: the Einstein summation convention of summing on repeated indices is used below. Consider a vector v and an arbitrary constant vector c. In index notation, the cross product is given by v × c = e_ijk v_j c_k e_i, where e_ijk is the permutation symbol. In an orthonormal basis, the components of a second-order tensor A can be written as a matrix A; in that case, the right-hand side corresponds to the cofactors of the matrix. Let 1 be the second-order identity tensor. The derivative of this tensor with respect to a second-order tensor A is given by ∂1/∂A : T = 0 for all T, because 1 is independent of A. Let A be a second-order tensor; then ∂A/∂A : T = [d/dα (A + αT)] at α = 0 = T = I : T, so ∂A/∂A = I, the fourth-order identity tensor. When F is equal to the identity tensor, we get the divergence theorem ∫Ω ∇G dΩ = ∫Γ n ⊗ G dΓ. We can also express the formula for integration by parts in Cartesian index notation.
In index notation, ∫Ω F_ij G_pj,p dΩ = ∫Γ n_p F_ij G_pj dΓ − ∫Ω G_pj F_ij,p dΩ. See also: tensor derivative, directional derivative, curvilinear coordinates, continuum mechanics
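The directional-derivative definition above, [d/dα f(S + αT)] at α = 0, can be evaluated by finite differences. As a concrete illustration (not from the article), take f(A) = det A on 2×2 tensors, whose directional derivative has the closed form det(A) tr(A⁻¹T):

```python
# Directional derivative of a scalar function of a second-order tensor,
# following the definition Df(S)[T] = d/dalpha f(S + alpha*T) at alpha = 0.
# Illustrative choice: f(A) = det A on 2x2 tensors, with closed form
# Df(A)[T] = det(A) * tr(A^-1 T).

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def directional_derivative(f, S, T, eps=1e-6):
    """Centered finite-difference evaluation of [d/dalpha f(S + alpha*T)] at alpha = 0."""
    plus = [[S[i][j] + eps * T[i][j] for j in range(2)] for i in range(2)]
    minus = [[S[i][j] - eps * T[i][j] for j in range(2)] for i in range(2)]
    return (f(plus) - f(minus)) / (2 * eps)

A = [[2.0, 1.0], [0.0, 3.0]]
T = [[1.0, 0.0], [4.0, 1.0]]

# Closed form: det(A) * tr(A^-1 T), with A^-1 written out for the 2x2 case.
d = det2(A)
Ainv = [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]
trace = sum(Ainv[i][0] * T[0][i] + Ainv[i][1] * T[1][i] for i in range(2))
print(directional_derivative(det2, A, T), d * trace)  # both approximately 1.0
```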
Tensor derivative (continuum mechanics)
–
Domain, its boundary and the outward unit normal
65.
Curvilinear perspective
–
Curvilinear perspective is a graphical projection used to draw 3D objects on 2D surfaces; it was codified by the artists André Barre and Albert Flocon in their 1968 book La Perspective curviligne. Earlier, less mathematically precise versions can be seen in the work of the miniaturist Jean Fouquet. Leonardo da Vinci, in a lost notebook, spoke of curved perspective lines. Examples of approximated five-point perspective can also be found in the self-portrait of the mannerist painter Parmigianino, seen through a shaving mirror. Another example would be the curved mirror in the Arnolfini Wedding by the Flemish painter Jan van Eyck. The book Vanishing Point: Perspective for Comics from the Ground Up by Jason Cheeseman-Meyer teaches five- and four-point curvilinear perspective. In 1959, Flocon had acquired a copy of Grafiek en tekeningen by M. C. Escher, who strongly impressed him with his use of bent and curved perspective. They started a long correspondence, in which Escher called Flocon a kindred spirit. This technique can, like two-point perspective, use a vertical line as a horizon line. The ellipse has the property that its major axis is a diameter of the bounding circle.
Curvilinear perspective
–
Curvilinearity in photography: Curvilinear (above) and
rectilinear (below) image. Notice the
barrel distortion typical for
fisheye lenses in the curvilinear image. While this example has been rectilinear-corrected by software, high quality
wide-angle lenses are built with optical rectilinear correction.
Curvilinear perspective
–
Curvilinear barrel distortion
Curvilinear perspective
–
Jean Fouquet, Arrival of Emperor Charles IV at the Basilica St Denis
Curvilinear perspective
–
Parmigianino, Self-portrait in a Convex Mirror
66.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007, and 10 digits long if assigned before 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN configuration was generated in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966. The 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number, identifies periodical publications such as magazines. The ISBN was conceived in 1967 in the United Kingdom by David Whitaker and in 1968 in the United States by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974, and the ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the second edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340-01381-8: 340 indicating the publisher, 01381 their serial number, and 8 the check digit. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Number EAN-13s.
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces. Separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture. In the United Kingdom, United States, and some other countries, where the service is provided by non-government-funded organisations, the issuing of ISBNs requires the payment of a fee. In Australia, ISBNs are issued by the commercial library services agency Thorpe-Bowker
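The check-digit arithmetic behind the examples above can be sketched as follows (`isbn10_check_digit` and `isbn13_check_digit` are illustrative helper names):

```python
# Check-digit computations for the examples above. The SBN 340-01381-8
# converts to ISBN 0-340-01381-8 without recomputing the check digit,
# because the prefixed 0 contributes nothing to the weighted sum.

def isbn10_check_digit(first9):
    """ISBN-10 check digit: weights 10 down to 2, result mod 11 ('X' stands for 10)."""
    total = sum((10 - i) * int(d) for i, d in enumerate(first9))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

def isbn13_check_digit(first12):
    """ISBN-13 (EAN-13) check digit: alternating weights 1 and 3, mod 10."""
    total = sum((1 if i % 2 == 0 else 3) * int(d) for i, d in enumerate(first12))
    return str((10 - total % 10) % 10)

print(isbn10_check_digit("034001381"))     # 8, matching ISBN 0-340-01381-8
print(isbn13_check_digit("978316148410"))  # 0, matching ISBN 978-3-16-148410-0
```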
International Standard Book Number
–
A 13-digit ISBN, 978-3-16-148410-0, as represented by an
EAN-13 bar code
67.
PubMed Identifier
–
PubMed is a free search engine accessing primarily the MEDLINE database of references and abstracts on life sciences and biomedical topics. The United States National Library of Medicine (NLM) at the National Institutes of Health maintains the database as part of the Entrez system of information retrieval. From 1971 to 1997, online access to the MEDLARS Online (MEDLINE) computerized database had primarily been through institutional facilities, such as university libraries. PubMed, first released in January 1996, ushered in the era of private, free, home- and office-based MEDLINE searching. The PubMed system was offered free to the public in June 1997, when MEDLINE searches via the Web were demonstrated, in a ceremony, by Vice President Al Gore. Information about the journals indexed in MEDLINE, and available through PubMed, is found in the NLM Catalog. As of 5 January 2017, PubMed has more than 26.8 million records going back to 1966, selectively to the year 1865, and very selectively to 1809; about 500,000 new records are added each year. As of the same date, 13.1 million of PubMed's records are listed with their abstracts. In 2016, NLM changed the indexing system so that publishers are able to directly correct typos. Simple searches on PubMed can be carried out by entering key aspects of a subject into PubMed's search window. When a journal article is indexed, numerous article parameters are extracted and stored as structured information. Such parameters are: Article Type, Secondary identifiers, Language, and publication type; the publication type parameter enables many special search features. These clinical queries can generate small sets of robust studies with considerable precision. Since July 2005, the MEDLINE article indexing process extracts important identifiers from the article abstract and puts those in a field called Secondary Identifier. The secondary identifier field is used to store accession numbers to various databases of molecular sequence data, gene expression or chemical compounds.
For clinical trials, PubMed extracts trial IDs for the two largest trial registries: ClinicalTrials.gov and the International Standard Randomized Controlled Trial Number Register. A reference which is judged particularly relevant can be marked, and related articles can be identified. If relevant, several studies can be selected and related articles to all of them can be generated using the Find related data option; the related articles are then listed in order of relatedness. To create these lists of related articles, PubMed compares words from the title and abstract of each citation, as well as the MeSH headings assigned, using a powerful word-weighted algorithm. The related articles function has been judged to be so precise that some researchers suggest it can be used instead of a full search. A strong feature of PubMed is its ability to automatically link to MeSH terms and subheadings. Examples would be: bad breath links to halitosis, heart attack to myocardial infarction. Where appropriate, these MeSH terms are automatically expanded, that is, made to include more specific terms. Terms like nursing are automatically linked to Nursing [MeSH] or Nursing [Subheading]. This important feature makes PubMed searches automatically more sensitive and avoids false-negative hits by compensating for the diversity of medical terminology. The My NCBI area can be accessed from any computer with web access. An earlier version of My NCBI was called PubMed Cubby
PubMed Identifier
–
PubMed
68.
Wikiversity
–
Wikiversity is a Wikimedia Foundation project that supports learning communities, their learning materials, and resulting activities. It differs from more structured projects such as Wikipedia in that it offers a series of tutorials, or courses, for the fostering of learning. Wikiversity's beta phase began on August 15, 2006, with the English language Wikiversity. The first project proposal was not approved; the second, modified proposal was approved. The launch of Wikiversity was announced at Wikimania 2006 as: Wikiversity is a center for the creation of and use of free learning materials, and the provision of learning activities. Wikiversity is one of many wikis used in educational contexts, as well as many initiatives that are creating free and open educational resources. The primary priorities and goals for Wikiversity are to create and host a range of free-content, multilingual learning materials and resources, and to host scholarly and learning projects and communities that support these materials. The Wikiversity e-Learning model places emphasis on learning groups and learning by doing. Wikiversity's motto and slogan is set learning free, indicating that groups and communities of Wikiversity participants will engage in learning projects. Learning is facilitated through collaboration on projects that are detailed, outlined, summarized, or whose results are reported by editing Wikiversity pages. Wikiversity learning projects include collections of wiki webpages concerned with the exploration of a particular topic. Wikiversity participants are encouraged to express their learning goals, and the Wikiversity community collaborates to develop learning activities. The Wikiversity e-Learning activities give learners the opportunity to build knowledge. Students have to be aware of their classmates' work in order to be able to correct it; by doing this, students develop their reflection skills. Secondly, these activities enable students to be autonomous, deciding what to write or edit, and also when and how to do it.
Students are able to freely resort to any means of support; at the same time, this fosters cognitive development by engaging students to collaborate with one another. However, the project is still in its early stages. Learning resources are developed by an individual or by groups, either on their own initiative or as part of a learning project, and texts useful to others are hosted at Wikibooks for update and maintenance. Learning groups with interests in each subject area create a web of resources that form the basis of discussions and activities at Wikiversity. Learning resources can be used by educators outside of Wikiversity for their own purposes, under the terms of the GFDL. Wikiversity also hosts original research; such research content may lack any peer review. WikiJournal is a project that provides quality control by having expert peer review of all included content; this activity started with WikiJournal of Medicine in 2014. WikiJournal of Medicine can also review and publish Wikipedia content. For newly established specific-language Wikiversities to move out of the initial beta phase, the respective communities are required to set up policies regulating research activities
Wikiversity
–
Wikiversity
69.
Orthogonal coordinate system
–
In mathematics, orthogonal coordinates are defined as a set of d coordinates q = (q1, q2, ..., qd) in which the coordinate surfaces all meet at right angles. A coordinate surface for a particular coordinate qk is the curve, surface, or hypersurface on which qk is a constant. Orthogonal coordinates are a special but extremely common case of curvilinear coordinates. The chief advantage of non-Cartesian coordinates is that they can be chosen to match the symmetry of the problem; the reason to prefer orthogonal coordinates instead of general curvilinear coordinates is simplicity, since many complications arise when coordinates are not orthogonal. For example, in orthogonal coordinates many problems may be solved by separation of variables, a mathematical technique that converts a complex d-dimensional problem into d one-dimensional problems that can be solved in terms of known functions. Many equations can be reduced to Laplace's equation or the Helmholtz equation; Laplace's equation is separable in 13 orthogonal coordinate systems, and the Helmholtz equation is separable in 11 orthogonal coordinate systems. Orthogonal coordinates never have off-diagonal terms in their metric tensor, and the scaling functions hi are used to calculate differential operators in the new coordinates, e.g. the gradient, the Laplacian, the divergence and the curl. A simple method for generating orthogonal coordinate systems in two dimensions is by a conformal mapping of a standard two-dimensional grid of Cartesian coordinates: a complex number z = x + iy can be formed from the coordinates x and y, and any holomorphic function of z with non-zero derivative maps the Cartesian grid to an orthogonal grid. However, there are other orthogonal coordinate systems in three dimensions that cannot be obtained by projecting or rotating a two-dimensional system, such as the ellipsoidal coordinates. More general orthogonal coordinates may be obtained by starting with some necessary coordinate surfaces and considering their orthogonal trajectories. In Cartesian coordinates, the basis vectors are fixed.
What distinguishes orthogonal coordinates is that, though the basis vectors vary, they are always orthogonal with respect to each other; note that the vectors are not necessarily of equal length. The useful functions known as scale factors of the coordinates are simply the lengths h i of the basis vectors e ^ i. The scale factors are sometimes called Lamé coefficients, but this terminology is best avoided since some more well known coefficients in linear elasticity carry the same name. Components in the normalized basis are most common in applications, for clarity of the quantities. The basis vectors shown above are covariant basis vectors. While a vector is an objective quantity, meaning its identity is independent of any coordinate system, the components of a vector depend on what basis the vector is represented in. Note that the summation symbols Σ and the summation range, indicating summation over all basis vectors, are often omitted. Vector addition and negation are done component-wise just as in Cartesian coordinates with no complication; extra considerations may be necessary for other vector operations. Note, however, that all of these operations assume that two vectors in a vector field are bound to the same point
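The conformal-map construction mentioned above can be verified numerically: the images of a horizontal and a vertical Cartesian grid line under a holomorphic map should still cross at a right angle. A small sketch, with w = z² as an illustrative choice of map:

```python
# Numerical check that a conformal map sends the Cartesian grid to an
# orthogonal grid. The map w = z^2 is an illustrative holomorphic choice.

def conformal(z):
    return z * z

def tangent(curve, t, eps=1e-6):
    """Tangent vector of a complex-valued curve at parameter t (centered difference)."""
    return (curve(t + eps) - curve(t - eps)) / (2 * eps)

x0, y0 = 1.2, 0.7  # grid point where the two coordinate curves cross

# Images of the horizontal line y = y0 and the vertical line x = x0.
def horiz(t):
    return conformal(complex(t, y0))

def vert(t):
    return conformal(complex(x0, t))

u, v = tangent(horiz, x0), tangent(vert, y0)
dot = u.real * v.real + u.imag * v.imag
print(dot)  # approximately 0: the image curves meet at a right angle
```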
Orthogonal coordinate system
–
A conformal map acting on a rectangular grid. Note that the orthogonality of the curved grid is retained.
70.
Bipolar coordinates
–
Bipolar coordinates are a two-dimensional orthogonal coordinate system. There are two commonly defined types of bipolar coordinates; the first is based on the Apollonian circles, in which the curves of constant σ and of constant τ are circles that intersect at right angles. The coordinates have two foci F1 and F2, which are generally taken to be fixed at (−a, 0) and (a, 0), respectively, on the x-axis of a Cartesian coordinate system. The second system is two-center bipolar coordinates; there is also a third coordinate system that is based on two poles. The term bipolar is sometimes applied to other curves having two singular points, such as ellipses, hyperbolas, and Cassini ovals; however, the term bipolar coordinates is reserved for the coordinates described here. The centers of the constant-σ circles lie on the y-axis. Circles of positive σ are centered above the x-axis, whereas those of negative σ lie below the axis; as the magnitude |σ| increases, the radius of the circles decreases and their centers approach the origin, which is reached when |σ| = π/2, where the radius attains its minimum value a. The curves of constant τ are non-intersecting circles of different radii, (x − a coth τ)² + y² = a²/sinh² τ, that surround the foci; the centers of the constant-τ circles lie on the x-axis. The circles of positive τ lie in the right half of the plane (x > 0), whereas the circles of negative τ lie in the left half. The τ = 0 curve corresponds to the y-axis; as the magnitude of τ increases, the radius of the circles decreases and their centers approach the foci. We note also the two identities tan σ = 2ay/(x² + y² − a²) and tanh τ = 2ax/(x² + y² + a²). A typical example would be the electric field surrounding two parallel cylindrical conductors. Bipolar coordinates form the basis for several sets of three-dimensional orthogonal coordinates; the bipolar cylindrical coordinates are produced by projecting in the z-direction.
References: H. Bateman, "Spheroidal and bipolar coordinates", Duke Mathematical Journal 4, no. 1, 39–50; Hazewinkel, Michiel, ed., "Bipolar coordinates", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4; Lockwood, "Bipolar coordinates", chapter 25 in A Book of Curves, Cambridge, England: Cambridge University Press, 1967, pp. 186–190; Korn and Korn, Mathematical Handbook for Scientists and Engineers, McGraw-Hill.
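As a quick numerical check (not part of the original article), the standard Apollonian-circle transformation x = a sinh τ/(cosh τ − cos σ), y = a sin σ/(cosh τ − cos σ) can be verified against the constant-τ circle equation and the tanh τ identity; the sample values of σ, τ and a are illustrative:

```python
import math

def bipolar_to_cartesian(sigma, tau, a=1.0):
    """Apollonian-circle bipolar coordinates with foci at (-a, 0) and (a, 0)."""
    d = math.cosh(tau) - math.cos(sigma)
    return a * math.sinh(tau) / d, a * math.sin(sigma) / d

# A curve of constant tau is a circle of radius a/sinh(tau) centered at (a*coth(tau), 0)
a, tau = 1.0, 0.8
cx, rad = a / math.tanh(tau), a / math.sinh(tau)
x, y = bipolar_to_cartesian(2.0, tau, a)
residual = (x - cx) ** 2 + y ** 2 - rad ** 2   # ~0 for every sigma

# The identity tanh(tau) = 2*a*x / (x^2 + y^2 + a^2)
identity_gap = math.tanh(tau) - 2 * a * x / (x * x + y * y + a * a)
```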
Bipolar coordinates
–
Bipolar coordinate system
71.
Elliptic coordinates
–
In geometry, the elliptic coordinate system is a two-dimensional orthogonal coordinate system in which the coordinate lines are confocal ellipses and hyperbolae. The two foci F1 and F2 are generally taken to be fixed at (−a, 0) and (+a, 0), respectively, on the x-axis of the Cartesian coordinate system. The most common definition of the coordinates is x = a cosh μ cos ν, y = a sinh μ sin ν, where μ is a nonnegative real number. On the complex plane, an equivalent relationship is x + iy = a cosh(μ + iν). These definitions correspond to ellipses and hyperbolae. In an orthogonal coordinate system the lengths of the basis vectors are known as scale factors. The scale factors for the elliptic coordinates are equal: h_μ = h_ν = a √(sinh² μ + sin² ν) = a √(cosh² μ − cos² ν), where the second form follows from the double-argument identities for hyperbolic and trigonometric functions. Other differential operators such as ∇ · F and ∇ × F can be expressed in these coordinates by substituting the scale factors into the general formulae found in orthogonal coordinates. An alternative and geometrically intuitive set of coordinates (σ, τ) = (cosh μ, cos ν) is sometimes used. Hence, the curves of constant σ are ellipses, whereas the curves of constant τ are hyperbolae; the coordinate τ must belong to the interval [−1, 1], whereas the σ coordinate must be greater than or equal to one. The coordinates have a simple relation to the distances to the foci F1 and F2: for any point in the plane, the sum d1 + d2 of its distances to the foci equals 2aσ; thus, the distance to F1 is a(σ + τ), whereas the distance to F2 is a(σ − τ). A drawback of these coordinates is that the points with Cartesian coordinates (x, y) and (x, −y) have the same coordinates (σ, τ), so the conversion to Cartesian coordinates is not a function. The scale factors for the alternative elliptic coordinates are h_σ = a √((σ² − τ²)/(σ² − 1)) and h_τ = a √((σ² − τ²)/(1 − τ²)). Hence, the area element becomes dA = a² (σ² − τ²)/√((σ² − 1)(1 − τ²)) dσ dτ.
Other differential operators such as ∇ · F and ∇ × F can be expressed in the coordinates (σ, τ) by substituting the scale factors into the general formulae found in orthogonal coordinates. Elliptic coordinates form the basis for several sets of three-dimensional orthogonal coordinates; the elliptic cylindrical coordinates are produced by projecting in the z-direction. Some traditional examples are solving systems such as electrons orbiting a molecule, or planetary orbits that have an elliptical shape. The geometric properties of elliptic coordinates can also be useful; for concreteness, r, p and q could represent the momenta of a particle and its decomposition products, respectively, and the integrand might involve the kinetic energies of the products. See also: Curvilinear coordinates; Generalized coordinates. References: Hazewinkel, Michiel, ed., "Elliptic coordinates", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4; Korn GA and Korn TM, Mathematical Handbook for Scientists and Engineers, McGraw-Hill.
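A brief numerical sketch (not part of the original article) of the transformation x = a cosh μ cos ν, y = a sinh μ sin ν, checking the constant-μ ellipse, the equality of the two scale-factor forms, and the focal-distance sum 2aσ; the sample values are illustrative:

```python
import math

def elliptic_to_cartesian(mu, nu, a=1.0):
    """Most common definition: x = a cosh(mu) cos(nu), y = a sinh(mu) sin(nu)."""
    return a * math.cosh(mu) * math.cos(nu), a * math.sinh(mu) * math.sin(nu)

a, mu, nu = 2.0, 0.5, 1.1
x, y = elliptic_to_cartesian(mu, nu, a)

# A curve of constant mu is an ellipse with semi-axes a*cosh(mu) and a*sinh(mu)
on_ellipse = (x / (a * math.cosh(mu))) ** 2 + (y / (a * math.sinh(mu))) ** 2  # = 1

# The two closed forms of the scale factor agree:
# sinh^2(mu) + sin^2(nu) == cosh^2(mu) - cos^2(nu)
lhs = math.sinh(mu) ** 2 + math.sin(nu) ** 2
rhs = math.cosh(mu) ** 2 - math.cos(nu) ** 2

# Sum of the distances to the foci (+/-a, 0) equals 2*a*sigma with sigma = cosh(mu)
d1 = math.hypot(x + a, y)
d2 = math.hypot(x - a, y)
focal_sum = d1 + d2
expected_sum = 2 * a * math.cosh(mu)
```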
Elliptic coordinates
–
Elliptic coordinate system
72.
Spherical coordinate system
–
The spherical coordinate system can be seen as the three-dimensional version of the polar coordinate system. The radial distance is called the radius or radial coordinate. The polar angle may be called colatitude, zenith angle, or normal angle. The use of symbols and the order of the coordinates differ between sources: in both systems ρ is often used instead of r, and other conventions are also used, so great care needs to be taken to check which one is being used. A number of different spherical coordinate systems following other conventions are used outside mathematics. In a geographical coordinate system, positions are measured in latitude, longitude and height or altitude. There are a number of different celestial coordinate systems based on different fundamental planes; the polar angle is often replaced by the elevation angle measured from the reference plane, so that an elevation angle of zero is at the horizon. The spherical coordinate system generalises the two-dimensional polar coordinate system; it can also be extended to higher-dimensional spaces, and is then referred to as a hyperspherical coordinate system. To define a spherical coordinate system, one must choose two orthogonal directions, the zenith and the azimuth reference, and an origin point in space. These choices determine a reference plane that contains the origin and is perpendicular to the zenith. The spherical coordinates of a point P are then defined as follows: the inclination is the angle between the zenith direction and the line segment OP; the azimuth is the angle measured from the azimuth reference direction to the orthogonal projection of the line segment OP on the reference plane. The sign of the azimuth is determined by choosing what is a positive sense of turning about the zenith; this choice is arbitrary, and is part of the coordinate system's definition. The elevation angle is 90 degrees minus the inclination angle. If the inclination is zero or 180 degrees, the azimuth is arbitrary; if the radius is zero, both azimuth and inclination are arbitrary.
In linear algebra, the vector from the origin O to the point P is often called the position vector of P. Several different conventions exist for representing the three coordinates, and for the order in which they should be written. The use of (r, θ, φ) to denote radial distance, inclination, and azimuth, respectively, is common practice in physics, and is specified by ISO standard 80000-2:2009, and earlier in ISO 31-11.
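A minimal round-trip sketch (not part of the original article) of the physics (ISO 80000-2) convention, with θ the inclination from the zenith and φ the azimuth; the degenerate cases noted in the text (origin, z-axis) are handled with arbitrary defaults:

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """Physics convention: theta = inclination from the zenith (z-axis),
    phi = azimuth measured from the x-axis in the reference plane."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

def cartesian_to_spherical(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r) if r > 0 else 0.0  # inclination is arbitrary at the origin
    phi = math.atan2(y, x)                      # azimuth is arbitrary on the z-axis
    return r, theta, phi

r0, theta0, phi0 = 2.0, math.pi / 3, math.pi / 4
x, y, z = spherical_to_cartesian(r0, theta0, phi0)
r1, theta1, phi1 = cartesian_to_spherical(x, y, z)
```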
Spherical coordinate system
–
Spherical coordinates (r, θ, φ) as commonly used in physics: radial distance r, polar angle θ (theta), and azimuthal angle φ (phi). The symbol ρ (rho) is often used instead of r.
73.
Parabolic cylindrical coordinates
–
Parabolic cylindrical coordinates are a three-dimensional orthogonal coordinate system whose coordinate surfaces are confocal parabolic cylinders; the foci of all these cylinders are located along the line defined by x = y = 0. Parabolic cylindrical coordinates have found many applications, e.g. the potential theory of edges, and problems involving Laplace's equation or the Helmholtz equation, for which such coordinates allow a separation of variables. A typical example would be the electric field surrounding a flat semi-infinite conducting plate. See also: Parabolic coordinates; Orthogonal coordinate system; Curvilinear coordinates. References: Morse PM and Feshbach H, Methods of Theoretical Physics, Part I; The Mathematics of Physics and Chemistry; Mathematical Handbook for Scientists and Engineers (same as Morse & Feshbach, substituting uk for ξk); Field Theory Handbook, Including Coordinate Systems, Differential Equations, and Their Solutions; MathWorld description of parabolic cylindrical coordinates.
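A short numerical sketch (not part of the original article) using one common convention, x = στ, y = (τ² − σ²)/2, z = z (the convention is an assumption here, though it matches the figure values below); it checks that a constant-σ surface is a parabolic cylinder whose focal line is the z-axis:

```python
import math

def parabolic_cyl_to_cartesian(sigma, tau, z):
    """One common convention: x = sigma*tau, y = (tau^2 - sigma^2)/2, z = z."""
    return sigma * tau, (tau * tau - sigma * sigma) / 2.0, z

sigma, tau = 1.5, 0.8
x, y, _ = parabolic_cyl_to_cartesian(sigma, tau, 0.0)

# Constant-sigma surface: parabolic cylinder y = x^2/(2*sigma^2) - sigma^2/2,
# a parabola whose focus sits at the origin, i.e. the focal line is x = y = 0
residual = y - (x * x / (2 * sigma ** 2) - sigma ** 2 / 2)

# Scale factors in this convention: h_sigma = h_tau = sqrt(sigma^2 + tau^2), h_z = 1
h = math.sqrt(sigma ** 2 + tau ** 2)
```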
Parabolic cylindrical coordinates
–
Coordinate surfaces of parabolic cylindrical coordinates. The red parabolic cylinder corresponds to σ=2, whereas the yellow parabolic cylinder corresponds to τ=1. The blue plane corresponds to z=2. These surfaces intersect at the point P (shown as a black sphere), which has Cartesian coordinates roughly (2, -1.5, 2).
74.
Paraboloidal coordinates
–
Paraboloidal coordinates are a three-dimensional orthogonal coordinate system that generalizes the two-dimensional parabolic coordinate system. References: Morse PM and Feshbach H, Methods of Theoretical Physics, Part I; The Mathematics of Physics and Chemistry; Mathematical Handbook for Scientists and Engineers; Arfken G, Mathematical Methods for Physicists (same as Morse & Feshbach, substituting uk for ξk); Field Theory Handbook, Including Coordinate Systems, Differential Equations, and Their Solutions; MathWorld description of confocal paraboloidal coordinates.
Paraboloidal coordinates
–
Coordinate surfaces of the three-dimensional paraboloidal coordinates.
75.
Prolate spheroidal coordinates
–
Prolate spheroidal coordinates result from rotating the two-dimensional elliptic coordinate system about the focal axis of the ellipse; rotation about the other axis produces oblate spheroidal coordinates. Prolate spheroidal coordinates can also be considered as a limiting case of ellipsoidal coordinates in which the two smallest principal axes are equal in length. One example of their use is solving for the wavefunction of an electron moving in the field of two positively charged nuclei, as in the hydrogen molecular ion, H2+. Another example is solving for the field generated by two small electrode tips. Other limiting cases include fields generated by a line segment or by a line with a segment missing. The azimuthal angle φ ranges over a full circle. The distances from the foci located at (0, 0, ±a) are r± = √(x² + y² + (z ∓ a)²) = a(cosh μ ∓ cos ν). An alternative and geometrically intuitive set of prolate spheroidal coordinates (σ, τ) = (cosh μ, cos ν) is sometimes used; hence, the surfaces of constant σ are prolate spheroids, whereas the surfaces of constant τ are hyperboloids of revolution. The coordinate τ belongs to the interval [−1, 1], whereas the σ coordinate must be greater than or equal to one. The coordinates σ and τ have a simple relation to the distances to the foci F1 and F2: for any point in the plane, the sum d1 + d2 of its distances to the foci equals 2aσ; thus, the distance to F1 is a(σ + τ), whereas the distance to F2 is a(σ − τ). Conventions in the literature differ. Methods of Theoretical Physics, Part I uses ξ1 = a cosh μ, ξ2 = sin ν, and ξ3 = cos φ; another treatment, the same as Morse & Feshbach but substituting uk for ξk, uses the coordinates ξ = cosh μ, η = sin ν, and φ. In the Mathematical Handbook for Scientists and Engineers, Korn and Korn use these coordinates but also introduce the degenerate coordinates. The Mathematics of Physics and Chemistry is similar to Korn and Korn, but uses colatitude θ = 90° − ν instead of latitude ν. In the Field Theory Handbook, Including Coordinate Systems, Differential Equations, and Their Solutions, Moon and Spencer use the colatitude convention θ = 90° − ν, and rename φ as ψ. Landau, Lifshitz, and Pitaevskii treat the prolate spheroidal coordinates as a limiting case of the general ellipsoidal coordinates, using coordinates that have the units of distance squared. See also the MathWorld description of prolate spheroidal coordinates.
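A numerical sketch (not part of the original article) using the convention with foci on the z-axis at (0, 0, ±a), checking the focal-distance relations r± = a(cosh μ ∓ cos ν) and their sum 2a cosh μ on a constant-μ spheroid; the sample values are illustrative:

```python
import math

def prolate_to_cartesian(mu, nu, phi, a=1.0):
    """Prolate spheroidal coordinates with foci at (0, 0, +/-a)."""
    s = a * math.sinh(mu) * math.sin(nu)
    return s * math.cos(phi), s * math.sin(phi), a * math.cosh(mu) * math.cos(nu)

a, mu, nu, phi = 1.0, 0.9, 0.6, -1.0
x, y, z = prolate_to_cartesian(mu, nu, phi, a)

# Focal distances: r_plus = a*(cosh(mu) - cos(nu)) to (0,0,+a),
#                  r_minus = a*(cosh(mu) + cos(nu)) to (0,0,-a)
d_plus = math.sqrt(x * x + y * y + (z - a) ** 2)
d_minus = math.sqrt(x * x + y * y + (z + a) ** 2)
total = d_plus + d_minus                 # constant 2*a*cosh(mu) on the spheroid
expected = 2 * a * math.cosh(mu)
gap_plus = d_plus - a * (math.cosh(mu) - math.cos(nu))
```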
Prolate spheroidal coordinates
–
The three
coordinate surfaces of prolate spheroidal coordinates. The red prolate spheroid (stretched sphere) corresponds to μ=1, and the blue two-sheet
hyperboloid corresponds to ν=45°. The yellow half-plane corresponds to φ=-60°, which is measured relative to the x -axis (highlighted in green). The black sphere represents the intersection point of the three surfaces, which has
Cartesian coordinates of roughly (0.831, -1.439, 2.182).
76.
Elliptic cylindrical coordinates
–
Elliptic cylindrical coordinates are a three-dimensional orthogonal coordinate system that results from projecting the two-dimensional elliptic coordinate system in the perpendicular z-direction. Hence, the coordinate surfaces are prisms of confocal ellipses and hyperbolae. The two foci F1 and F2 are generally taken to be fixed at (−a, 0) and (+a, 0), respectively, on the x-axis; these definitions correspond to ellipses and hyperbolae. The scale factors for the elliptic cylindrical coordinates μ and ν are equal, h_μ = h_ν = a √(sinh² μ + sin² ν), whereas the remaining scale factor is h_z = 1. An alternative and geometrically intuitive set of coordinates (σ, τ) = (cosh μ, cos ν) is sometimes used. Hence, the curves of constant σ are ellipses, whereas the curves of constant τ are hyperbolae; the coordinate τ must belong to the interval [−1, 1], whereas the σ coordinate must be greater than or equal to one. The coordinates have a simple relation to the distances to the foci F1 and F2: for any point in the plane, the sum d1 + d2 of its distances to the foci equals 2aσ; thus, the distance to F1 is a(σ + τ), whereas the distance to F2 is a(σ − τ). A typical example would be the electric field surrounding a flat conducting plate of width 2a. The three-dimensional wave equation, when expressed in elliptic cylindrical coordinates, may be solved by separation of variables. The geometric properties of elliptic coordinates can also be useful; for concreteness, r, p and q could represent the momenta of a particle and its decomposition products, respectively, and the integrand might involve the kinetic energies of the products. References: Morse PM and Feshbach H, Methods of Theoretical Physics, Part I; The Mathematics of Physics and Chemistry; Mathematical Handbook for Scientists and Engineers (same as Morse & Feshbach, substituting uk for ξk); Field Theory Handbook, Including Coordinate Systems, Differential Equations, and Their Solutions; MathWorld description of elliptic cylindrical coordinates.
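A finite-difference sketch (not part of the original article) verifying the scale factors h_μ = h_ν = a √(sinh² μ + sin² ν) for the transformation x = a cosh μ cos ν, y = a sinh μ sin ν, z = z; the sample point and step size are illustrative:

```python
import math

def elliptic_cyl_to_cartesian(mu, nu, z, a=1.0):
    """Elliptic cylindrical coordinates: the 2-D elliptic system extruded along z."""
    return (a * math.cosh(mu) * math.cos(nu),
            a * math.sinh(mu) * math.sin(nu),
            z)

def norm_diff(p, q, eps):
    """Length of the central-difference derivative (p - q)/(2*eps)."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q))) / (2 * eps)

a, mu, nu, eps = 2.0, 0.7, 2.3, 1e-6
h_mu = norm_diff(elliptic_cyl_to_cartesian(mu + eps, nu, 0.0, a),
                 elliptic_cyl_to_cartesian(mu - eps, nu, 0.0, a), eps)
h_nu = norm_diff(elliptic_cyl_to_cartesian(mu, nu + eps, 0.0, a),
                 elliptic_cyl_to_cartesian(mu, nu - eps, 0.0, a), eps)
expected = a * math.sqrt(math.sinh(mu) ** 2 + math.sin(nu) ** 2)
```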
Elliptic cylindrical coordinates
–
Coordinate surfaces of elliptic cylindrical coordinates. The yellow sheet is the prism of a half-hyperbola corresponding to ν=-45°, whereas the red tube is an elliptical prism corresponding to μ=1. The blue sheet corresponds to z =1. The three surfaces intersect at the point P (shown as a black sphere) with
Cartesian coordinates roughly (2.182, -1.661, 1.0). The foci of the ellipse and hyperbola lie at x = ±2.0.
77.
Toroidal coordinates
–
Toroidal coordinates are a three-dimensional orthogonal coordinate system that results from rotating the two-dimensional bipolar coordinate system about the axis that separates its two foci. Thus, the two foci F1 and F2 in bipolar coordinates become a ring of radius a in the xy plane of the toroidal coordinate system; the focal ring is known as the reference circle. The coordinate ranges are −π < σ ≤ π, τ ≥ 0 and 0 ≤ φ < 2π. Surfaces of constant σ correspond to spheres of different radii, x² + y² + (z − a cot σ)² = a²/sin² σ, that all pass through the focal ring but are not concentric. The surfaces of constant τ are non-intersecting tori of different radii, (√(x² + y²) − a coth τ)² + z² = a²/sinh² τ, that surround the focal ring. The centers of the constant-σ spheres lie along the z-axis. The coordinates may be calculated from the Cartesian coordinates as follows. The 3-variable Laplace equation ∇²Φ = 0 admits solution via separation of variables in toroidal coordinates: making the substitution Φ = U √(cosh τ − cos σ), a separable equation is obtained. The resulting Legendre functions are referred to as toroidal harmonics. Toroidal harmonics have many interesting properties; the rest of the toroidal harmonics can be obtained, for instance, in terms of the complete elliptic integrals, by using recurrence relations for associated Legendre functions. Typical applications would be the potential and electric field of a conducting torus, or, in the degenerate case, of a charged ring. Alternatively, a different substitution may be made, Φ = U/ρ, where ρ = √(x² + y²) = a sinh τ/(cosh τ − cos σ); again, a separable equation is obtained. Note that although the toroidal harmonics are used again for the T function, the argument is coth τ rather than cosh τ, and the μ and ν indices are exchanged. This method is useful for situations in which the conditions are independent of the spherical angle θ, such as the charged ring. For identities relating the toroidal harmonics with argument hyperbolic cosine with those of argument hyperbolic cotangent, see the Whipple formulae. References: an alternative separation of Laplace's equation in toroidal coordinates and its application to electrostatics; "A note on the scalar potential of an electric current-ring", Mathematical Proceedings of the Cambridge Philosophical Society; Morse PM and Feshbach H, Methods of Theoretical Physics, Part I; Korn GA and Korn TM, Mathematical Handbook for Scientists and Engineers; The Mathematics of Physics and Chemistry.
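A numerical sketch (not part of the original article) of the standard toroidal transformation, checking that a constant-τ surface is a torus around the focal ring and a constant-σ surface is a sphere through it; the sample values are illustrative:

```python
import math

def toroidal_to_cartesian(sigma, tau, phi, a=1.0):
    """Toroidal coordinates with focal ring of radius a in the xy plane."""
    d = math.cosh(tau) - math.cos(sigma)
    rho = a * math.sinh(tau) / d           # distance from the z-axis
    return rho * math.cos(phi), rho * math.sin(phi), a * math.sin(sigma) / d

a, sigma, tau = 1.0, 0.9, 1.2
x, y, z = toroidal_to_cartesian(sigma, tau, 2.0, a)
rho = math.hypot(x, y)

# Constant-tau surface: torus (rho - a*coth(tau))^2 + z^2 = a^2/sinh^2(tau)
torus_residual = (rho - a / math.tanh(tau)) ** 2 + z ** 2 - (a / math.sinh(tau)) ** 2

# Constant-sigma surface: sphere x^2 + y^2 + (z - a*cot(sigma))^2 = a^2/sin^2(sigma)
sphere_residual = (x * x + y * y + (z - a / math.tan(sigma)) ** 2
                   - (a / math.sin(sigma)) ** 2)
```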
Toroidal coordinates
78.
Bispherical coordinates
–
Bispherical coordinates are a three-dimensional orthogonal coordinate system that results from rotating the two-dimensional bipolar coordinate system about the axis that connects the two foci. Thus, the two foci F1 and F2 in bipolar coordinates remain points (on the z-axis) in the bispherical coordinate system. The surfaces of constant τ are non-intersecting spheres of different radii, x² + y² + (z − a coth τ)² = a²/sinh² τ, that surround the foci; the centers of the constant-τ spheres lie along the z-axis, whereas the constant-σ tori are centered in the xy plane. The formulae for the inverse transformation are σ = arccos((R² − a²)/Q), τ = arsinh(2az/Q), φ = atan(y/x), where R = √(x² + y² + z²) and Q = √((R² + a²)² − (2az)²). The classic applications of bispherical coordinates are in solving partial differential equations, e.g. Laplace's equation, for which bispherical coordinates allow a separation of variables; however, the Helmholtz equation is not separable in bispherical coordinates. A typical example would be the electric field surrounding two conducting spheres of different radii. References: Morse PM and Feshbach H, Methods of Theoretical Physics, Part I; Mathematical Handbook for Scientists and Engineers; Field Theory Handbook, Including Coordinate Systems, Differential Equations, and Their Solutions.
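A numerical sketch (not part of the original article) of the standard bispherical transformation with foci at (0, 0, ±a), checking the constant-τ sphere and round-tripping through the inverse formulas σ = arccos((R² − a²)/Q), τ = arsinh(2az/Q), φ = atan2(y, x) with Q = √((R² + a²)² − (2az)²):

```python
import math

def bispherical_to_cartesian(sigma, tau, phi, a=1.0):
    """Bispherical coordinates with foci at (0, 0, +/-a)."""
    d = math.cosh(tau) - math.cos(sigma)
    s = a * math.sin(sigma) / d
    return s * math.cos(phi), s * math.sin(phi), a * math.sinh(tau) / d

a, sigma0, tau0, phi0 = 1.0, 1.3, 0.7, 0.4
x, y, z = bispherical_to_cartesian(sigma0, tau0, phi0, a)

# Constant-tau surface: sphere of radius a/sinh(tau) centered at (0, 0, a*coth(tau))
residual = x * x + y * y + (z - a / math.tanh(tau0)) ** 2 - (a / math.sinh(tau0)) ** 2

# Inverse transformation
R2 = x * x + y * y + z * z
Q = math.sqrt((R2 + a * a) ** 2 - (2 * a * z) ** 2)
sigma1 = math.acos((R2 - a * a) / Q)
tau1 = math.asinh(2 * a * z / Q)
phi1 = math.atan2(y, x)
```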
Bispherical coordinates
79.
Bipolar cylindrical coordinates
–
Bipolar cylindrical coordinates are a three-dimensional orthogonal coordinate system that results from projecting the two-dimensional bipolar coordinate system in the perpendicular z-direction. The two lines of foci F1 and F2 of the projected Apollonian circles are generally taken to be defined by x = −a and x = +a, respectively, in the Cartesian coordinate system. The term bipolar is often applied to other curves having two singular points, such as ellipses, hyperbolas, and Cassini ovals; however, the term bipolar coordinates is never used to describe coordinates associated with those curves. The surfaces of constant τ are non-intersecting cylinders of different radii, (x − a coth τ)² + y² = a²/sinh² τ, that surround the focal lines; the focal lines and all these cylinders are parallel to the z-axis. In the z = 0 plane, the centers of the constant-σ and constant-τ cylinders lie on the y and x axes, respectively. The scale factors for the bipolar coordinates σ and τ are equal, h_σ = h_τ = a/(cosh τ − cos σ), whereas the remaining scale factor is h_z = 1. A typical example would be the electric field surrounding two parallel cylindrical conductors. References: The Mathematics of Physics and Chemistry; Mathematical Handbook for Scientists and Engineers; Field Theory Handbook, Including Coordinate Systems, Differential Equations, and Their Solutions; MathWorld description of bipolar cylindrical coordinates.
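A finite-difference sketch (not part of the original article) verifying the scale factor h_σ = a/(cosh τ − cos σ) for the projected bipolar transformation; the sample point and step size are illustrative:

```python
import math

def bipolar_cyl_to_cartesian(sigma, tau, z, a=1.0):
    """Bipolar cylindrical coordinates: the 2-D bipolar system extruded along z."""
    d = math.cosh(tau) - math.cos(sigma)
    return a * math.sinh(tau) / d, a * math.sin(sigma) / d, z

# Central-difference estimate of h_sigma = |d(position)/d(sigma)|
a, sigma, tau, eps = 1.0, 0.8, 1.1, 1e-6
x1, y1, _ = bipolar_cyl_to_cartesian(sigma - eps, tau, 0.0, a)
x2, y2, _ = bipolar_cyl_to_cartesian(sigma + eps, tau, 0.0, a)
h_sigma = math.hypot(x2 - x1, y2 - y1) / (2 * eps)
expected = a / (math.cosh(tau) - math.cos(sigma))
```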
Bipolar cylindrical coordinates
–
Coordinate surfaces of the bipolar cylindrical coordinates. The yellow crescent corresponds to σ, whereas the red tube corresponds to τ and the blue plane corresponds to z =1. The three surfaces intersect at the point P (shown as a black sphere).
80.
Conical coordinates
–
Conical coordinates are a three-dimensional orthogonal coordinate system consisting of concentric spheres (described by their radius r) and two families of perpendicular cones, aligned along the z- and x-axes, respectively. The conical coordinates are defined by x = rμν/(bc), y = (r/b) √((μ² − b²)(ν² − b²)/(b² − c²)), z = (r/c) √((μ² − c²)(ν² − c²)/(c² − b²)), with the limitations on the coordinates ν² < c² < μ² < b². In this coordinate system, both Laplace's equation and the Helmholtz equation are separable. The scale factor for the radius r is one, h_r = 1, as in spherical coordinates; the scale factors for the two conical coordinates are h_μ = r √((μ² − ν²)/((b² − μ²)(μ² − c²))) and h_ν = r √((μ² − ν²)/((b² − ν²)(c² − ν²))). An alternative set of coordinates has been derived: ξ = r cos(φ sin ζ), ψ = r sin(φ sin ζ), ζ = θ, with the corresponding inverse relations r = √(ξ² + ψ²), φ = (1/sin ζ) arctan(ψ/ξ), θ = ζ. If the path between any two points is constrained to the surface of the cone given by ζ = π/4, then the distance between any two points (ξ1, ψ1) and (ξ2, ψ2) is s12² = (ξ2 − ξ1)² + (ψ2 − ψ1)². References: Morse PM and Feshbach H, Methods of Theoretical Physics, Part I; The Mathematics of Physics and Chemistry; Mathematical Handbook for Scientists and Engineers; Arfken G, Mathematical Methods for Physicists; Field Theory Handbook, Including Coordinate Systems, Differential Equations, and Their Solutions.
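A numerical sketch (not part of the original article) of the conical transformation, checking that a constant-r surface is indeed a sphere of radius r; the parameter values b, c, μ, ν are illustrative choices satisfying ν² < c² < μ² < b²:

```python
import math

def conical_to_cartesian(r, mu, nu, b, c):
    """Conical coordinates; requires nu^2 < c^2 < mu^2 < b^2
    so that every radicand below is positive."""
    x = r * mu * nu / (b * c)
    y = (r / b) * math.sqrt((mu ** 2 - b ** 2) * (nu ** 2 - b ** 2) / (b ** 2 - c ** 2))
    z = (r / c) * math.sqrt((mu ** 2 - c ** 2) * (nu ** 2 - c ** 2) / (c ** 2 - b ** 2))
    return x, y, z

b, c = 2.0, math.sqrt(2.0)          # nu^2 = 1 < c^2 = 2 < mu^2 = 3 < b^2 = 4
r, mu, nu = 3.0, math.sqrt(3.0), 1.0
x, y, z = conical_to_cartesian(r, mu, nu, b, c)

# Constant-r surfaces are concentric spheres: x^2 + y^2 + z^2 = r^2
radius = math.sqrt(x * x + y * y + z * z)
```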
Conical coordinates