1.
Orthogonal coordinates
–
In mathematics, orthogonal coordinates are defined as a set of d coordinates q = (q1, …, qd) in which the coordinate surfaces all meet at right angles. A coordinate surface for a particular coordinate qk is the curve, surface, or hypersurface on which qk is a constant. Orthogonal coordinates are a special but extremely common case of curvilinear coordinates. The chief advantage of non-Cartesian coordinates is that they can be chosen to match the symmetry of the problem; the reason to prefer orthogonal coordinates over general curvilinear coordinates is simplicity, since many complications arise when coordinates are not orthogonal. For example, in orthogonal coordinates many problems may be solved by separation of variables, a mathematical technique that converts a complex d-dimensional problem into d one-dimensional problems that can be solved in terms of known functions. Many equations can be reduced to Laplace's equation or the Helmholtz equation: Laplace's equation is separable in 13 orthogonal coordinate systems, and the Helmholtz equation is separable in 11 orthogonal coordinate systems. Orthogonal coordinates never have off-diagonal terms in their metric tensor, and the associated scaling functions h_i are used to calculate differential operators in the new coordinates, e.g. the gradient, the Laplacian, the divergence and the curl. A simple method for generating orthogonal coordinate systems in two dimensions is by a conformal mapping of a standard two-dimensional grid of Cartesian coordinates: a complex number z = x + iy can be formed from the coordinates x and y, and any holomorphic function of z with non-vanishing derivative then produces a new orthogonal grid, since conformal maps preserve angles. However, there are other orthogonal coordinate systems in three dimensions that cannot be obtained by projecting or rotating a two-dimensional system, such as the ellipsoidal coordinates. More general orthogonal coordinates may be obtained by starting with some necessary coordinate surfaces and considering their orthogonal trajectories. In Cartesian coordinates, the basis vectors are fixed.
What distinguishes orthogonal coordinates is that, though the basis vectors vary from point to point, they remain mutually orthogonal; note that the vectors are not necessarily of equal length. The useful functions known as scale factors of the coordinates are simply the lengths h_i of the basis vectors e_i. The scale factors are sometimes called Lamé coefficients, but this terminology is best avoided, since some better known coefficients in linear elasticity carry the same name. Components in the normalized basis are most common in applications, for clarity of the quantities. The basis vectors shown above are covariant basis vectors. While a vector is an objective quantity, meaning its identity is independent of any coordinate system, the components of a vector depend on what basis the vector is represented in. Note that the summation symbol Σ and the summation range, indicating summation over all basis vectors, are often omitted. Vector addition and negation are done component-wise just as in Cartesian coordinates, with no complication; extra considerations may be necessary for other vector operations. Note, however, that all of these operations assume that two vectors in a vector field are bound to the same point.
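The scale factors described above can be checked numerically. The sketch below (Python with NumPy; the helper names basis_vectors and spherical_to_cartesian are illustrative, not from the text) estimates the covariant basis vectors of spherical coordinates by central differences, computes the scale factors as their lengths, and verifies that the metric tensor is diagonal with entries h_i², i.e. that the coordinates are orthogonal.

```python
import numpy as np

def basis_vectors(q, to_cartesian, eps=1e-6):
    """Estimate the covariant basis vectors e_i = dr/dq_i at point q
    by central finite differences (one row per coordinate)."""
    q = np.asarray(q, dtype=float)
    rows = []
    for i in range(len(q)):
        dq = np.zeros_like(q)
        dq[i] = eps
        rows.append((to_cartesian(q + dq) - to_cartesian(q - dq)) / (2 * eps))
    return np.array(rows)

def spherical_to_cartesian(q):
    r, theta, phi = q  # radius, polar angle, azimuth (physics convention)
    return np.array([r * np.sin(theta) * np.cos(phi),
                     r * np.sin(theta) * np.sin(phi),
                     r * np.cos(theta)])

e = basis_vectors([2.0, 0.7, 1.2], spherical_to_cartesian)
h = np.linalg.norm(e, axis=1)   # scale factors: expect [1, r, r sin(theta)]
G = e @ e.T                     # metric tensor g_ij = e_i . e_j
print(np.round(h, 6))
print(np.round(G - np.diag(h ** 2), 6))  # off-diagonal terms vanish: orthogonal
```

For spherical coordinates the expected scale factors are h_r = 1, h_θ = r, h_φ = r sin θ, and the printed residual matrix is essentially zero.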
Orthogonal coordinates
–
Visualization of 2D orthogonal coordinates. Curves obtained by holding all but one coordinate constant are shown, along with basis vectors. Note that the basis vectors aren't of equal length: they need not be, they only need to be orthogonal.
2.
Skew coordinates
–
A system of skew coordinates is a curvilinear coordinate system where the coordinate surfaces are not orthogonal, in contrast to orthogonal coordinates. Skew coordinate systems can be useful if the geometry of a problem fits well into a skewed system. For example, solving Laplace's equation in a parallelogram will be easiest when done in appropriately skewed coordinates. The simplest 3D case of a skew coordinate system is a Cartesian one where one of the axes has been bent by some angle ϕ. For this example, the x axis of a Cartesian coordinate system has been bent toward the z axis by ϕ. Let e_1, e_2, and e_3 respectively be unit vectors along the x, y, and z axes. We'll favor writing quantities with respect to the covariant basis; since the basis vectors are all constant, vector addition and subtraction will simply be familiar component-wise addition and subtraction. Now, let a = ∑_i a^i e_i and b = ∑_i b^i e_i, where the sums indicate summation over all values of the index. Finally, the curl of a vector is ∇ × a = (1/cos ϕ) ∑_{i,j,k} e_k ε^{ijk} ∂a_j/∂q^i, where the prefactor 1/cos ϕ is 1/√g, the reciprocal of the square root of the determinant of the metric tensor for this skewed system.
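The skewed system just described can be made concrete with a short sketch (Python with NumPy; the angle value 0.5 is an arbitrary choice for illustration). It builds the covariant basis for an x axis bent toward the z axis, forms the metric tensor, and shows that the dot product of two vectors given in skew components must go through the metric, since the off-diagonal entry sin ϕ is nonzero.

```python
import numpy as np

phi = 0.5  # angle by which the x axis is bent toward the z axis (example value)

# Covariant basis: unit vector along the bent x axis, plus the usual y and z axes.
e1 = np.array([np.cos(phi), 0.0, np.sin(phi)])
e2 = np.array([0.0, 1.0, 0.0])
e3 = np.array([0.0, 0.0, 1.0])
E = np.stack([e1, e2, e3])  # rows are the basis vectors

g = E @ E.T  # metric tensor g_ij = e_i . e_j; g[0, 2] = sin(phi) != 0, so not orthogonal

a = np.array([1.0, 2.0, 3.0])  # components in the skew basis
b = np.array([4.0, 0.0, 1.0])
dot = a @ g @ b                # Euclidean dot product computed via the metric

# Cross-check by converting both vectors to Cartesian components first.
assert np.isclose(dot, np.dot(a @ E, b @ E))
print(g[0, 2], dot)
```

The assertion confirms that contracting skew components through g_ij reproduces the ordinary Cartesian dot product.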
Skew coordinates
–
A Cartesian coordinate system where the x axis has been bent toward the z axis.
3.
Coordinate line
–
The order of the coordinates is significant, and they are sometimes identified by their position in an ordered tuple and sometimes by a letter, as in the x-coordinate. The coordinates are taken to be real numbers in elementary mathematics. The use of a coordinate system allows problems in geometry to be translated into problems about numbers and vice versa; this is the basis of analytic geometry. The simplest example of a coordinate system is the identification of points on a line with real numbers using the number line. In this system, an arbitrary point O (the origin) is chosen on a given line. The coordinate of a point P is defined as the signed distance from O to P. Each point is given a unique coordinate and each real number is the coordinate of a unique point. The prototypical example of a coordinate system is the Cartesian coordinate system. In the plane, two perpendicular lines are chosen and the coordinates of a point are taken to be the signed distances to the lines. In three dimensions, three mutually perpendicular planes are chosen and the three coordinates of a point are the signed distances to each of the planes. This can be generalized to create n coordinates for any point in n-dimensional Euclidean space. Depending on the direction and order of the coordinate axes, the system may be a right-hand or a left-hand system. This is one of many coordinate systems; another common coordinate system for the plane is the polar coordinate system. A point is chosen as the pole and a ray from this point is taken as the polar axis. For a given angle θ, there is a single line through the pole whose angle with the polar axis is θ. Then, for any given number r, there is a unique point on this line whose signed distance from the origin is r. For a given pair of coordinates there is a single point, but a point does not have a unique pair of coordinates: for example, (r, θ), (r, θ + 2π) and (−r, θ + π) are all polar coordinates for the same point. The pole is represented by (0, θ) for any value of θ. There are two common methods for extending the polar coordinate system to three dimensions.
In the cylindrical coordinate system, a z-coordinate with the same meaning as in Cartesian coordinates is added to the r and θ polar coordinates, giving a triple (r, θ, z). Spherical coordinates take this a step further by converting the pair of cylindrical coordinates (r, z) to polar coordinates (ρ, φ), giving a triple (ρ, θ, φ). A point in the plane may be represented in homogeneous coordinates by a triple (x, y, z), where x/z and y/z are the Cartesian coordinates of the point.
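The relationships between Cartesian, polar, and cylindrical coordinates described above can be sketched directly (Python; the function names are illustrative). Note how atan2 resolves the many-representations ambiguity by always returning the canonical angle in (−π, π].

```python
import math

def polar_to_cartesian(r, theta):
    return (r * math.cos(theta), r * math.sin(theta))

def cartesian_to_polar(x, y):
    # math.atan2 accounts for the signs of x and y, covering all quadrants.
    return (math.hypot(x, y), math.atan2(y, x))

def cylindrical_to_cartesian(r, theta, z):
    # Cylindrical coordinates are polar coordinates plus an unchanged z.
    x, y = polar_to_cartesian(r, theta)
    return (x, y, z)

# Round trip: converting back yields the canonical representation
# with r >= 0 and -pi < theta <= pi.
x, y = polar_to_cartesian(2.0, math.pi / 3)
r, theta = cartesian_to_polar(x, y)
print(round(r, 6), round(theta, 6))  # 2.0 and pi/3 ~ 1.047198
```

The same point could have been written as (2, π/3 + 2π) or (−2, π/3 + π); all three convert to the same Cartesian pair.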
Coordinate line
–
The spherical coordinate system is commonly used in physics. It assigns three numbers (known as coordinates) to every point in Euclidean space: radial distance r, polar angle θ (theta), and azimuthal angle φ (phi). The symbol ρ (rho) is often used instead of r.
4.
Cartesian coordinate
–
Each reference line is called a coordinate axis or just axis of the system, and the point where they meet is its origin, usually at the ordered pair (0, 0). The coordinates can also be defined as the positions of the perpendicular projections of the point onto the two axes, expressed as signed distances from the origin. One can use the same principle to specify the position of any point in three-dimensional space by three Cartesian coordinates, its signed distances to three mutually perpendicular planes. In general, n Cartesian coordinates specify the point in an n-dimensional Euclidean space for any dimension n; these coordinates are equal, up to sign, to distances from the point to n mutually perpendicular hyperplanes. The invention of Cartesian coordinates in the 17th century by René Descartes revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra. Using the Cartesian coordinate system, geometric shapes can be described by Cartesian equations: algebraic equations involving the coordinates of the points lying on the shape. For example, a circle of radius 2, centered at the origin of the plane, may be described as the set of all points whose coordinates x and y satisfy the equation x² + y² = 4. A familiar example is the concept of the graph of a function. Cartesian coordinates are also essential tools for most applied disciplines that deal with geometry, including astronomy, physics, and engineering. They are the most common coordinate system used in computer graphics and computer-aided geometric design. Nicole Oresme, a French cleric and friend of the Dauphin in the 14th century, used constructions similar to Cartesian coordinates well before the time of Descartes. The adjective Cartesian refers to the French mathematician and philosopher René Descartes, who published this idea in 1637; it was independently discovered by Pierre de Fermat, who also worked in three dimensions, although Fermat did not publish the discovery. Both authors used a single axis in their treatments and have a variable length measured in reference to this axis.
The concept of using a pair of axes was introduced later, after Descartes' La Géométrie was translated into Latin in 1649 by Frans van Schooten; these commentators introduced several concepts while trying to clarify the ideas contained in Descartes' work. Many other coordinate systems have been developed since Descartes, such as the polar coordinates for the plane. The development of the Cartesian coordinate system would play a role in the development of the calculus by Isaac Newton. The two-coordinate description of the plane was later generalized into the concept of vector spaces. Choosing a Cartesian coordinate system for a one-dimensional space, that is, for a straight line, involves choosing a point O of the line, a unit of length, and an orientation for the line. An orientation chooses which of the two half-lines determined by O is the positive half, and which is the negative half; we then say that the line is oriented from the negative half towards the positive half.
Cartesian coordinate
–
The right-hand rule.
Cartesian coordinate
–
Illustration of a Cartesian coordinate plane. Four points are marked and labeled with their coordinates: (2,3) in green, (−3,1) in red, (−1.5,−2.5) in blue, and the origin (0,0) in purple.
Cartesian coordinate
–
3D Cartesian Coordinate Handedness
5.
Spherical coordinates
–
It can be seen as the three-dimensional version of the polar coordinate system. The radial distance is called the radius or radial coordinate. The polar angle may be called colatitude, zenith angle, or normal angle. The use of symbols and the order of the coordinates differs between sources: ρ is often used instead of r, and other conventions are also used, so great care needs to be taken to check which one is being used. A number of different spherical coordinate systems following other conventions are used outside mathematics; in a geographical coordinate system, positions are measured in latitude, longitude, and height or altitude. There are a number of different celestial coordinate systems based on different fundamental planes. The polar angle is often replaced by the elevation angle measured from the reference plane, so that an elevation angle of zero is at the horizon. The spherical coordinate system generalises the two-dimensional polar coordinate system. It can also be extended to higher-dimensional spaces and is then referred to as a hyperspherical coordinate system. To define a spherical coordinate system, one must choose two orthogonal directions, the zenith and the azimuth reference, and an origin point in space. These choices determine a reference plane that contains the origin and is perpendicular to the zenith. The spherical coordinates of a point P are then defined as follows: the radius is the Euclidean distance from the origin O to P; the inclination is the angle between the zenith direction and the line segment OP; and the azimuth is the signed angle measured from the azimuth reference direction to the orthogonal projection of the line segment OP on the reference plane. The sign of the azimuth is determined by choosing what is a positive sense of turning about the zenith. This choice is arbitrary, and is part of the coordinate system's definition. The elevation angle is 90 degrees minus the inclination angle. If the inclination is zero or 180 degrees, the azimuth is arbitrary; if the radius is zero, both azimuth and inclination are arbitrary.
In linear algebra, the vector from the origin O to the point P is often called the position vector of P. Several different conventions exist for representing the three coordinates, and for the order in which they should be written. The use of (r, θ, φ) to denote radial distance, inclination, and azimuth, respectively, is common practice in physics, and is specified by ISO standard 80000-2:2009, and earlier in ISO 31-11.
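The physics convention (r, θ, φ) just mentioned can be sketched as conversion functions (Python; the function names are illustrative). The edge cases follow the definitions above: the inclination is arbitrary at the origin and the azimuth is arbitrary on the z axis, so some convention must be chosen there.

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """Physics / ISO 80000-2 convention: r radial distance, theta polar angle
    measured from the zenith (z axis), phi azimuth in the x-y plane."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

def cartesian_to_spherical(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r) if r > 0 else 0.0  # inclination; arbitrary at the origin
    phi = math.atan2(y, x)                      # azimuth; arbitrary on the z axis
    return (r, theta, phi)

p = spherical_to_cartesian(1.0, math.pi / 2, 0.0)
print([round(c, 6) for c in p])  # a point on the positive x axis
```

A round trip through both functions recovers (r, θ, φ) whenever θ is in (0, π) and φ is in (−π, π].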
Spherical coordinates
–
Spherical coordinates (r, θ, φ) as commonly used in physics: radial distance r, polar angle θ (theta), and azimuthal angle φ (phi). The symbol ρ (rho) is often used instead of r.
6.
Scalar (mathematics)
–
A scalar is an element of a field which is used to define a vector space. A quantity described by multiple scalars, such as one having both direction and magnitude, is called a vector. More generally, a vector space may be defined by using any field instead of the real numbers, such as the complex numbers; the scalars of that space will then be the elements of the associated field. A scalar product operation (not to be confused with scalar multiplication) may be defined on a vector space; a vector space equipped with a scalar product is called an inner product space. The real component of a quaternion is also called its scalar part. The term is also sometimes used informally to mean a vector, matrix, tensor, or other usually compound value that is actually reduced to a single component: thus, for example, the product of a 1×n matrix and an n×1 matrix, which is formally a 1×1 matrix, is often said to be a scalar. The term scalar matrix is used to denote a matrix of the form kI, where k is a scalar and I is the identity matrix. The word scalar derives from the Latin word scalaris, an adjectival form of scala. The English word scale also comes from scala; according to a citation in the Oxford English Dictionary, the first recorded usage of the term scalar in English came with W. R. Hamilton in 1846, referring to the real part of a quaternion. A vector space is defined as a set of vectors, a set of scalars, and a scalar multiplication operation that takes a scalar k and a vector v to another vector kv. For example, in a coordinate space, the scalar multiplication k(v₁, …, vₙ) yields (kv₁, …, kvₙ). In a function space, kƒ is the function x ↦ kƒ(x). The scalars can be taken from any field, including the rational, algebraic, real, and complex numbers, as well as finite fields. According to a fundamental theorem of linear algebra, every vector space has a basis. It follows that every vector space over a scalar field K is isomorphic to a coordinate vector space where the coordinates are elements of K; for example, every real vector space of dimension n is isomorphic to the n-dimensional real space Rn. Alternatively, a vector space V can be equipped with a norm function that assigns to every vector v in V a scalar ||v||. By definition, multiplying v by a scalar k also multiplies its norm by |k|.
If ||v|| is interpreted as the length of v, this operation can be described as scaling the length of v by k. A vector space equipped with a norm is called a normed vector space. The norm is usually defined to be an element of V's scalar field K. Moreover, if V has dimension 2 or more, K must be closed under square root, as well as the four arithmetic operations; thus the rational numbers Q are excluded.
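The scaling property ||kv|| = |k|·||v|| is easy to check for the standard Euclidean norm on R², the simplest example of a norm (Python sketch; the values are arbitrary illustrations).

```python
import math

def norm(v):
    """Euclidean norm on R^n, the standard example of a norm."""
    return math.sqrt(sum(x * x for x in v))

v = [3.0, 4.0]
k = -2.0
scaled = [k * x for x in v]    # scalar multiplication, done component-wise
print(norm(v), norm(scaled))   # 5.0 10.0 -- the norm scales by |k| = 2
```

Note that the sign of k is lost: scaling by −2 doubles the length just as scaling by +2 does.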
Scalar (mathematics)
–
Scalars are real numbers used in linear algebra, as opposed to vectors. This image shows a Euclidean vector. Its coordinates x and y are scalars, as is its length, but v is not a scalar.
7.
Tensor
–
In mathematics, tensors are geometric objects that describe linear relations between geometric vectors, scalars, and other tensors. Elementary examples of such relations include the dot product, the cross product, and linear maps. Geometric vectors, often used in physics and engineering applications, and scalars themselves are also tensors. Given a coordinate basis or fixed frame of reference, a tensor can be represented as an organized multidimensional array of numerical values. The order of a tensor is the dimensionality of the array needed to represent it, or equivalently, the number of indices needed to label a component of that array. For example, a linear map is represented by a matrix in a basis, and therefore is a 2nd-order tensor. A vector is represented as a 1-dimensional array in a basis, and is a 1st-order tensor; scalars are single numbers and are thus 0th-order tensors. Because they express a relationship between vectors, tensors themselves must be independent of a particular choice of coordinate system; this coordinate independence takes the form of a transformation law relating the arrays computed in different coordinate systems. The precise form of the transformation law determines the type of the tensor. The tensor type is a pair of natural numbers (n, m), where n is the number of contravariant indices and m is the number of covariant indices; the total order of a tensor is the sum of these two numbers. The concept enabled an alternative formulation of the differential geometry of a manifold in the form of the Riemann curvature tensor. There are several approaches to defining tensors; although seemingly different, the approaches just describe the same geometric concept using different languages and at different levels of abstraction. For example, a linear operator is represented in a basis as a two-dimensional square n × n array. The numbers in the array are known as the scalar components of the tensor or simply its components. They are denoted by indices giving their position in the array, as subscripts and superscripts. For example, the components of an order-2 tensor T could be denoted Tij. Whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below.
The total number of indices required to identify each component uniquely is equal to the dimension of the array, and is called the order, degree, or rank of the tensor. However, the term rank generally has another meaning in the context of matrices. Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis.
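The transformation law just mentioned can be demonstrated numerically for a type (1, 1) tensor, i.e. a linear map (Python with NumPy; the random matrices are arbitrary illustrations, not from the text). The contravariant index picks up the inverse change-of-basis matrix and the covariant index picks up the matrix itself, and the underlying map is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))  # components of a linear map (a (1,1)-tensor) in the old basis
B = rng.standard_normal((3, 3))  # change of basis: columns are new basis vectors in old coordinates

# Transformation law for a (1,1)-tensor: T' = B^-1 T B
# (one contravariant index contracts with B^-1, one covariant index with B).
T_new = np.linalg.inv(B) @ T @ B

# The geometric object is coordinate-independent: applying the map to a vector
# gives the same result whether computed in old or new components.
v_new = rng.standard_normal(3)  # a vector's components in the new basis
v_old = B @ v_new               # the same vector's components in the old basis
assert np.allclose(B @ (T_new @ v_new), T @ v_old)
print("transformation law verified")
```

The assertion is exactly the statement that the tensor's identity is independent of the basis: only its component array changes.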
Tensor
–
Cauchy stress tensor, a second-order tensor. The tensor's components, in a three-dimensional Cartesian coordinate system, form the matrix whose columns are the stresses (forces per unit area) acting on the e 1, e 2, and e 3 faces of the cube.
8.
Tensor analysis
–
In mathematics, tensors are geometric objects that describe linear relations between geometric vectors, scalars, and other tensors. Elementary examples of such relations include the dot product, the cross product, and linear maps. Geometric vectors, often used in physics and engineering applications, and scalars themselves are also tensors. Given a coordinate basis or fixed frame of reference, a tensor can be represented as an organized multidimensional array of numerical values. The order of a tensor is the dimensionality of the array needed to represent it, or equivalently, the number of indices needed to label a component of that array. For example, a linear map is represented by a matrix in a basis, and therefore is a 2nd-order tensor. A vector is represented as a 1-dimensional array in a basis, and is a 1st-order tensor; scalars are single numbers and are thus 0th-order tensors. Because they express a relationship between vectors, tensors themselves must be independent of a particular choice of coordinate system; this coordinate independence takes the form of a transformation law relating the arrays computed in different coordinate systems. The precise form of the transformation law determines the type of the tensor. The tensor type is a pair of natural numbers (n, m), where n is the number of contravariant indices and m is the number of covariant indices; the total order of a tensor is the sum of these two numbers. The concept enabled an alternative formulation of the differential geometry of a manifold in the form of the Riemann curvature tensor. There are several approaches to defining tensors; although seemingly different, the approaches just describe the same geometric concept using different languages and at different levels of abstraction. For example, a linear operator is represented in a basis as a two-dimensional square n × n array. The numbers in the array are known as the scalar components of the tensor or simply its components. They are denoted by indices giving their position in the array, as subscripts and superscripts. For example, the components of an order-2 tensor T could be denoted Tij. Whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below.
The total number of indices required to identify each component uniquely is equal to the dimension of the array, and is called the order, degree, or rank of the tensor. However, the term rank generally has another meaning in the context of matrices. Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis.
Tensor analysis
–
Cauchy stress tensor, a second-order tensor. The tensor's components, in a three-dimensional Cartesian coordinate system, form the matrix whose columns are the stresses (forces per unit area) acting on the e 1, e 2, and e 3 faces of the cube.
9.
Curl (mathematics)
–
In vector calculus, the curl is a vector operator that describes the infinitesimal rotation of a 3-dimensional vector field. At every point in the field, the curl at that point is represented by a vector; the attributes of this vector (length and direction) characterize the rotation at that point. The direction of the curl is the axis of rotation, as determined by the right-hand rule. If the vector field represents the flow velocity of a moving fluid, then the curl is the circulation density of the fluid. A vector field whose curl is zero is called irrotational. The curl is a form of differentiation for vector fields. The alternative terminology rotor or rotational and the alternative notations rot F and ∇ × F are often used for curl F; this is a similar phenomenon as in the 3-dimensional cross product, and the connection is reflected in the notation ∇ × for the curl. The name curl was first suggested by James Clerk Maxwell in 1871. The curl of a vector field F, denoted by curl F, or ∇ × F, or rot F, at a point is defined in terms of its projection onto various lines through the point. As such, the curl operator maps continuously differentiable functions f : ℝ³ → ℝ³ to continuous functions g : ℝ³ → ℝ³; in fact, it maps Cᵏ functions in ℝ³ to Cᵏ⁻¹ functions in ℝ³. Implicitly, curl is defined by

(∇ × F) · n̂ ≝ lim_{A→0} (1/|A|) ∮_C F · dr

where ∮_C F · dr is a line integral along the boundary C of the area A in question, |A| is the magnitude of the area, and n̂ is the unit normal to that area. Note that the equation for each component (∇ × F)_k can be obtained by exchanging each occurrence of a subscript 1, 2, 3 in cyclic permutation: 1→2, 2→3, and 3→1. If (x₁, x₂, x₃) are the Cartesian coordinates and (u₁, u₂, u₃) are the orthogonal coordinates, then

h_i = √( (∂x₁/∂u_i)² + (∂x₂/∂u_i)² + (∂x₃/∂u_i)² )

is the length of the coordinate vector corresponding to u_i. The remaining two components of curl result from cyclic permutation of indices: 3,1,2 → 1,2,3 → 2,3,1. Suppose the vector field describes the velocity field of a fluid flow and a small ball is located within the fluid; if the ball has a rough surface, the fluid flowing past it will make it rotate.
The rotation axis points in the direction of the curl of the field at the centre of the ball, and the angular speed of the rotation is half the magnitude of the curl at this point. The notation ∇ × F has its origins in the similarities to the 3-dimensional cross product, and such notation involving operators is common in physics and vector algebra. However, in certain coordinate systems, such as polar-toroidal coordinates, the notation ∇ × F turns out to be misleading. Expanded in 3-dimensional Cartesian coordinates, ∇ × F is

(∂F_z/∂y − ∂F_y/∂z) i + (∂F_x/∂z − ∂F_z/∂x) j + (∂F_y/∂x − ∂F_x/∂y) k

where i, j, and k are the unit vectors for the x-, y-, and z-axes, respectively. Although expressed in terms of coordinates, the result is invariant under proper rotations of the coordinate axes. In index notation with the Levi-Civita symbol, ∇ × F = e_k ε^{klm} ∇_l F_m, where the e_k are the coordinate vector fields. Equivalently, using the exterior derivative, the curl can be expressed as ∇ × F = [⋆(dF♭)]♯. Here ♭ and ♯ are the musical isomorphisms and ⋆ is the Hodge star operator.
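The component formula above can be checked numerically (Python with NumPy; the helper name curl is illustrative). The sketch estimates the Jacobian of a field by central differences and assembles the three curl components; the test field F = (−y, x, 0) is a rigid rotation about the z axis, whose curl is the constant vector (0, 0, 2), i.e. twice the angular velocity, in line with the rotating-ball picture.

```python
import numpy as np

def curl(F, p, eps=1e-5):
    """Numerical curl of a vector field F: R^3 -> R^3 at point p,
    using central differences. J[i, j] approximates dF_j / dx_i."""
    p = np.asarray(p, dtype=float)
    J = np.zeros((3, 3))
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        J[i] = (F(p + d) - F(p - d)) / (2 * eps)
    return np.array([J[1, 2] - J[2, 1],   # dFz/dy - dFy/dz
                     J[2, 0] - J[0, 2],   # dFx/dz - dFz/dx
                     J[0, 1] - J[1, 0]])  # dFy/dx - dFx/dy

# Rigid rotation about the z axis: F = (-y, x, 0); curl F = (0, 0, 2) everywhere.
F = lambda p: np.array([-p[1], p[0], 0.0])
print(np.round(curl(F, [1.0, 2.0, 3.0]), 6))  # [0. 0. 2.]
```

For a linear field the central difference is exact up to rounding, so the result matches the analytic curl to high precision.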
Curl (mathematics)
–
The components of F at position r, normal and tangent to a closed curve C in a plane, enclosing a planar vector area A = A n.
10.
Quantum mechanics
–
Quantum mechanics, including quantum field theory, is a branch of physics which is the fundamental theory of nature at the small scales and low energies of atoms and subatomic particles. Classical physics, the physics existing before quantum mechanics, derives from quantum mechanics as an approximation valid only at large (macroscopic) scales. Early quantum theory was profoundly reconceived in the mid-1920s. The reconceived theory is formulated in various specially developed mathematical formalisms; in one of them, a mathematical function, the wave function, provides information about the probability amplitude of position, momentum, and other physical properties of a particle. In 1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he later described in a paper titled On the nature of light. This experiment played a major role in the general acceptance of the wave theory of light. In 1838, Michael Faraday discovered cathode rays. Planck's hypothesis that energy is radiated and absorbed in discrete quanta precisely matched the observed patterns of black-body radiation. In 1896, Wilhelm Wien empirically determined a distribution law of black-body radiation; Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, it was valid only at high frequencies and underestimated the radiance at low frequencies. Later, Planck corrected this model using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law. Following Max Planck's solution in 1900 to the black-body radiation problem, Albert Einstein offered a quantum-based theory to explain the photoelectric effect. Among the first to study quantum phenomena in nature were Arthur Compton and C. V. Raman. Robert Andrews Millikan studied the photoelectric effect experimentally, and Albert Einstein developed a theory for it. In 1913, Peter Debye extended Niels Bohr's theory of atomic structure, introducing elliptical orbits.
This phase is known as the old quantum theory. According to Planck, each energy element is proportional to its frequency: E = hν, where h is Planck's constant. Planck cautiously insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself. In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle, with a discrete quantum of energy that was dependent on its frequency; he won the 1921 Nobel Prize in Physics for this work. In the mid-1920s, developments in quantum mechanics led to its becoming the standard formulation for atomic physics, and the Copenhagen interpretation of Niels Bohr became widely accepted. In the summer of 1925, Bohr and Heisenberg published results that closed the old quantum theory. Out of deference to their particle-like behavior in certain processes and measurements, light quanta came to be called photons. From Einstein's simple postulation was born a flurry of debating and theorizing; thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927.
Quantum mechanics
–
Max Planck is considered the father of the quantum theory.
Quantum mechanics
–
Solution to Schrödinger's equation for the hydrogen atom at different energy levels. The brighter areas represent a higher probability of finding an electron
Quantum mechanics
–
The 1927 Solvay Conference in Brussels.
11.
Theory of relativity
–
The theory of relativity usually encompasses two interrelated theories by Albert Einstein: special relativity and general relativity. Special relativity applies to elementary particles and their interactions, describing all their physical phenomena except gravity. General relativity explains the law of gravitation and its relation to other forces of nature, and it applies to the cosmological and astrophysical realm, including astronomy. The theory transformed theoretical physics and astronomy during the 20th century; it introduced concepts including spacetime as a unified entity of space and time, relativity of simultaneity, kinematic and gravitational time dilation, and length contraction. In the field of physics, relativity improved the science of elementary particles and their fundamental interactions. With relativity, cosmology and astrophysics predicted extraordinary astronomical phenomena such as neutron stars, black holes, and gravitational waves. Einstein published the theory of special relativity in 1905; Max Planck, Hermann Minkowski and others did subsequent work. Einstein developed general relativity between 1907 and 1915, with contributions by many others after 1915. The final form of general relativity was published in 1916. The term theory of relativity was based on the expression relative theory used in 1906 by Planck, who emphasized how the theory uses the principle of relativity. In the discussion section of the same paper, Alfred Bucherer used for the first time the expression theory of relativity. By the 1920s, the physics community understood and accepted special relativity. It rapidly became a significant and necessary tool for theorists and experimentalists in the new fields of atomic physics, nuclear physics, and quantum mechanics. By comparison, general relativity did not appear to be as useful; it seemed to offer little potential for experimental test, as most of its assertions were on an astronomical scale. The mathematics of general relativity seemed difficult and fully understandable only by a small number of people.
Around 1960, general relativity became central to physics and astronomy; new mathematical techniques applicable to general relativity streamlined calculations and made its concepts more easily visualized. Special relativity is a theory of the structure of spacetime; it was introduced in Einstein's 1905 paper On the Electrodynamics of Moving Bodies. Special relativity is based on two postulates which are contradictory in classical mechanics: the laws of physics are the same for all observers in uniform motion relative to one another, and the speed of light in a vacuum is the same for all observers, regardless of the motion of the light source. The resultant theory copes with experiment better than classical mechanics; for instance, postulate 2 explains the results of the Michelson–Morley experiment. Moreover, the theory has many surprising and counterintuitive consequences. Some of these are relativity of simultaneity (two events, simultaneous for one observer, may not be simultaneous for another observer in relative motion) and time dilation (moving clocks are measured to tick more slowly than an observer's stationary clock).
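The time dilation consequence above is governed by the Lorentz factor γ = 1/√(1 − v²/c²). The sketch below (Python; the 80%-of-light-speed scenario is an arbitrary illustration) computes γ and the dilated time interval measured by a stationary observer.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s (exact, by definition of the metre)

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2 / c^2); defined for |v| < c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# A clock moving at 80% of light speed ticks slower by the factor gamma.
v = 0.8 * C
gamma = lorentz_factor(v)
proper_time = 10.0                   # seconds elapsed on the moving clock
observer_time = gamma * proper_time  # seconds measured by the stationary observer
print(round(gamma, 6), round(observer_time, 6))  # 1.666667 16.666667
```

At everyday speeds v/c is tiny and γ is indistinguishable from 1, which is why classical mechanics emerges as the low-speed approximation.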
Theory of relativity
–
USSR stamp dedicated to Albert Einstein
Theory of relativity
–
Key concepts
12.
Engineering
–
The term engineering is derived from the Latin ingenium, meaning cleverness, and ingeniare, meaning to contrive or devise. Engineering has existed since ancient times as humans devised fundamental inventions such as the wedge, lever, and wheel; each of these inventions is essentially consistent with the modern definition of engineering. The term engineering is derived from the word engineer, which itself dates back to 1390, when an engineer originally referred to a constructor of military engines. In this context, now obsolete, an engine referred to a military machine. Notable examples of the obsolete usage which have survived to the present day are military engineering corps. The word engine itself is of even older origin, ultimately deriving from the Latin ingenium, meaning innate quality, especially mental power, hence a clever invention. The earliest civil engineer known by name is Imhotep; as one of the officials of the Pharaoh Djoser, he probably designed and supervised the construction of the Pyramid of Djoser at Saqqara in Egypt around 2630–2611 BC. Ancient Greece developed machines in both civilian and military domains: the Antikythera mechanism, the first known mechanical computer, and the mechanical inventions of Archimedes are examples of early mechanical engineering. In the Middle Ages, the trebuchet was developed. The first steam engine was built in 1698 by Thomas Savery; the development of this device gave rise to the Industrial Revolution in the coming decades. With the rise of engineering as a profession in the 18th century, the term came to be applied more narrowly to fields in which mathematics and science were applied to these ends; similarly, in addition to military and civil engineering, the fields then known as the mechanic arts became incorporated into engineering. The inventions of Thomas Newcomen and the Scottish engineer James Watt gave rise to modern mechanical engineering. The development of specialized machines and machine tools during the Industrial Revolution led to the rapid growth of mechanical engineering both in its birthplace Britain and abroad.
John Smeaton was the first self-proclaimed civil engineer and is regarded as the father of civil engineering. He was an English civil engineer responsible for the design of bridges, canals, harbours and lighthouses, and he was also a capable mechanical engineer and an eminent physicist. Smeaton designed the third Eddystone Lighthouse, where he pioneered the use of hydraulic lime; his lighthouse remained in use until 1877 and was dismantled and partially rebuilt at Plymouth Hoe, where it is known as Smeaton's Tower. The United States census of 1850 listed the occupation of engineer for the first time, with a count of 2,000; there were fewer than 50 engineering graduates in the U.S. before 1865. In 1870 there were a dozen U.S. mechanical engineering graduates; in 1890 there were 6,000 engineers in civil, mining, mechanical and electrical specialties. There was no chair of applied mechanism and applied mechanics established at Cambridge until 1875. The theoretical work of James Maxwell and Heinrich Hertz in the late 19th century gave rise to the field of electronics.
Engineering
–
The steam engine, a major driver in the Industrial Revolution, underscores the importance of engineering in modern history. This beam engine is on display in the Technical University of Madrid.
Engineering
–
Relief map of the Citadel of Lille, designed in 1668 by Vauban, the foremost military engineer of his age.
Engineering
–
The Ancient Romans built aqueducts to bring a steady supply of clean fresh water to cities and towns in the empire.
Engineering
–
The International Space Station represents a modern engineering challenge from many disciplines.
13.
Three-dimensional space
–
Three-dimensional space is a geometric setting in which three values are required to determine the position of an element; this is the informal meaning of the term dimension. In physics and mathematics, a sequence of n numbers can be understood as a location in n-dimensional space; when n = 3, the set of all such locations is called three-dimensional Euclidean space. It is commonly represented by the symbol ℝ³, and it serves as a three-parameter model of the physical universe in which all known matter exists. However, this space is only one example of a large variety of spaces in three dimensions called 3-manifolds. Furthermore, in this case, these three values can be labeled by any combination of three chosen from the terms width, height, depth, and breadth. In mathematics, analytic geometry describes every point in three-dimensional space by means of three coordinates. Three coordinate axes are given, each perpendicular to the other two at the origin, the point at which they cross. They are usually labeled x, y, and z; below are images of the above-mentioned systems. Two distinct points determine a line. Three distinct points are either collinear or determine a unique plane; four distinct points can either be collinear, coplanar, or determine the entire space. Two distinct lines can intersect, be parallel, or be skew. Two parallel lines, or two intersecting lines, lie in a unique plane, so skew lines are lines that do not meet and do not lie in a common plane. Two distinct planes can either meet in a common line or are parallel. Three distinct planes, no pair of which are parallel, can either meet in a common line, meet in a unique common point, or have no point in common; in the last case, the three lines of intersection of each pair of planes are mutually parallel. A line can lie in a given plane, intersect that plane in a unique point, or be parallel to the plane; in the last case, there will be lines in the plane that are parallel to the given line. A hyperplane is a subspace of one dimension less than the dimension of the full space; the hyperplanes of a three-dimensional space are the two-dimensional subspaces, that is, the planes.
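The classification of two lines as intersecting, parallel, or skew can be checked computationally. The sketch below (a hypothetical helper of my own, assuming each line is given by a point and a direction vector) tests parallelism via the cross product and coplanarity via the scalar triple product.

```python
def cross(u, v):
    # Cross product of two 3D vectors.
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def classify_lines(p1, d1, p2, d2, eps=1e-9):
    """Classify two lines in 3D, each given by a point p and a direction d,
    as 'parallel', 'intersecting', or 'skew'."""
    n = cross(d1, d2)
    w = tuple(b - a for a, b in zip(p1, p2))        # vector from p1 to p2
    if all(abs(c) < eps for c in n):                # directions are parallel
        return 'parallel'
    triple = sum(wi * ni for wi, ni in zip(w, n))   # w . (d1 x d2)
    return 'intersecting' if abs(triple) < eps else 'skew'
```

For instance, the x-axis and a line through (0, 1, 0) in the z direction neither meet nor are parallel, so they are skew.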
Three-dimensional space
–
Three-dimensional Cartesian coordinate system with the x-axis pointing towards the observer.
14.
Standard basis
–
In mathematics, the standard basis for a Euclidean space is the set of unit vectors pointing in the direction of the axes of a Cartesian coordinate system. For example, the standard basis for the Euclidean plane is formed by the vectors e_x = (1, 0) and e_y = (0, 1). Here the vector e_x points in the x direction and the vector e_y points in the y direction. There are several common notations for these vectors, including (e_x, e_y), (e_1, e_2), and (i, j). These vectors are often written with a hat to emphasize their status as unit vectors. Each of these vectors is sometimes referred to as the versor of the corresponding Cartesian axis. These vectors are a basis in the sense that any other vector can be expressed uniquely as a linear combination of them. For example, every vector v in three-dimensional space can be written uniquely as v_x e_x + v_y e_y + v_z e_z, the scalars v_x, v_y, v_z being the scalar components of the vector v. In n-dimensional Euclidean space, the standard basis consists of n distinct vectors. Standard bases can also be defined for other vector spaces, such as spaces of polynomials or of matrices. In both cases, the standard basis consists of those elements of the vector space such that all coefficients but one are 0. For polynomials, the standard basis consists of the monomials and is commonly called the monomial basis. For matrices M_{m×n}, the standard basis consists of the m×n matrices with exactly one non-zero entry, which is 1. For example, the standard basis for 2×2 matrices is formed by the 4 matrices e_11 = [[1, 0], [0, 0]], e_12 = [[0, 1], [0, 0]], e_21 = [[0, 0], [1, 0]], e_22 = [[0, 0], [0, 1]]. By definition, the standard basis is a sequence of orthogonal unit vectors; in other words, it is an ordered and orthonormal basis. However, an ordered orthonormal basis is not necessarily a standard basis. For instance, the two vectors representing a 30° rotation of the 2D standard basis described above, i.e. v_1 = (√3/2, 1/2) and v_2 = (−1/2, √3/2), are also orthogonal unit vectors, but they are not aligned with the axes of the Cartesian coordinate system, so they do not form a standard basis. There is a standard basis also for the ring of polynomials in n indeterminates over a field, namely the monomials.
This family is the canonical basis of the R-module R^(I) of all families f = (f_i) from an index set I into a ring R which are zero except for a finite number of indices, if we interpret 1 as 1_R, the multiplicative identity of R. The existence of standard bases has become a topic of interest in algebraic geometry; it is now part of a theory called standard monomial theory.
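As a concrete illustration, the standard basis of Rⁿ and the decomposition of a vector into its scalar components can be written in a few lines (a sketch; the function names are my own):

```python
def standard_basis(n):
    # The i-th standard basis vector has a 1 in position i and 0 elsewhere.
    return [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]

def decompose(v):
    """Return the pairs (v_i, e_i) such that v = sum of v_i * e_i."""
    return list(zip(v, standard_basis(len(v))))
```

For example, decompose((2, 5)) pairs the component 2 with e_1 = (1, 0) and the component 5 with e_2 = (0, 1), expressing the uniqueness of the expansion.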
Standard basis
–
Every vector a in three dimensions is a linear combination of the standard basis vectors i, j, and k.
15.
Basis (linear algebra)
–
In more general terms, a basis is a linearly independent spanning set. Given a basis of a vector space V, every element of V can be expressed uniquely as a linear combination of basis vectors. A vector space can have several distinct sets of basis vectors; however, each such set has the same number of elements. A basis B of a vector space V over a field F is a linearly independent subset of V that spans V. In more detail, suppose that B = {v_1, …, v_n} is a finite subset of a vector space V over a field F. Then B is a basis if every vector x in V can be written uniquely as x = a_1 v_1 + … + a_n v_n with a_1, …, a_n in F. The numbers a_i are called the coordinates of the vector x with respect to the basis B. A vector space that has a finite basis is called finite-dimensional. To deal with infinite-dimensional spaces, we must generalize the definition to include infinite basis sets. The sums in the definition are all finite because, without additional structure, the axioms of a vector space do not permit us to meaningfully speak about an infinite sum of vectors. Settings that permit infinite linear combinations allow alternative definitions of the basis concept. It is often convenient to list the basis vectors in a specific order, for example, when considering the transformation matrix of a linear map with respect to a basis; we then speak of an ordered basis, which we define to be a sequence of linearly independent vectors that span V. Equivalently, B is a basis if and only if both of the following hold: B is a set of linearly independent vectors, i.e. it is a linearly independent set; and every vector in V can be expressed as a linear combination of vectors in B in a unique way. If the basis is ordered, then the coefficients in this linear combination provide coordinates of the vector relative to the basis. Every vector space has a basis; the proof of this requires the axiom of choice. All bases of a vector space have the same cardinality, called the dimension of the vector space; this result is known as the dimension theorem, and requires the ultrafilter lemma, a strictly weaker form of the axiom of choice.
Many vector spaces can also be given a standard basis, which is both spanning and linearly independent. Examples of standard bases: in Rⁿ, {e_1, …, e_n}, where e_i is the ith column of the identity matrix; in P_2, the set of all polynomials of degree at most 2, the standard basis is {1, x, x²}; in M_{2,2}, the set of all 2×2 matrices, the standard basis is {M_{1,1}, M_{1,2}, M_{2,1}, M_{2,2}}, where M_{m,n} is the 2×2 matrix with a 1 in the (m, n) position and zeros elsewhere. Finally, given a vector space V over a field F, one may consider two bases for V and ask how coordinates with respect to one are related to coordinates with respect to the other.
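As a sketch of how the defining property can be verified numerically, the hypothetical helper below solves for the unique coefficients a_i with x = a_1 v_1 + … + a_n v_n by Gaussian elimination, and returns None when the given vectors are linearly dependent (hence not a basis):

```python
def coordinates(basis, x, eps=1e-12):
    """Solve for the unique scalars a_i with x = sum of a_i * basis[i].
    Returns None if the given vectors do not form a basis (singular system)."""
    n = len(basis)
    # Augmented matrix whose columns are the basis vectors, last column is x.
    A = [[basis[j][i] for j in range(n)] + [x[i]] for i in range(n)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        if abs(A[pivot][col]) < eps:
            return None                      # vectors are linearly dependent
        A[col], A[pivot] = A[pivot], A[col]
        A[col] = [v / A[col][col] for v in A[col]]
        for r in range(n):
            if r != col:
                A[r] = [a - A[r][col] * b for a, b in zip(A[r], A[col])]
    return [row[n] for row in A]
```

For the basis {(1, 0), (1, 1)} of R², the vector (3, 5) has coordinates (−2, 5), since (3, 5) = −2·(1, 0) + 5·(1, 1); for the dependent pair {(1, 2), (2, 4)} the helper reports failure.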
Basis (linear algebra)
–
This picture illustrates the standard basis in R 2. The blue and orange vectors are the elements of the basis; the green vector can be given in terms of the basis vectors, and so is linearly dependent upon them.
16.
Fluid mechanics
–
Fluid mechanics is a branch of physics concerned with the mechanics of fluids and the forces on them. Fluid mechanics has a wide range of applications, including mechanical engineering, civil engineering, chemical engineering, geophysics, and astrophysics. Fluid mechanics can be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of the effect of forces on fluid motion. Fluid mechanics, especially fluid dynamics, is an active field of research with many problems that are partly or wholly unsolved. Problems in fluid mechanics can be mathematically complex and can often best be solved by numerical methods. A modern discipline, called computational fluid dynamics, is devoted to this approach to solving fluid mechanics problems. Particle image velocimetry, an experimental method for visualizing and analyzing fluid flow, also takes advantage of the highly visual nature of fluid flow. Inviscid flow was further analyzed by various mathematicians, and viscous flow was explored by a multitude of engineers including Jean Léonard Marie Poiseuille. Fluid statics or hydrostatics is the branch of fluid mechanics that studies fluids at rest. It embraces the study of the conditions under which fluids are at rest in stable equilibrium, and is contrasted with fluid dynamics. Hydrostatics is fundamental to hydraulics, the engineering of equipment for storing, transporting and using fluids. It is also relevant to some aspects of geophysics and astrophysics, to meteorology, and to medicine. Fluid dynamics is a subdiscipline of fluid mechanics that deals with fluid flow: the science of liquids and gases in motion. The solution to a fluid dynamics problem typically involves calculating various properties of the fluid, such as velocity, pressure, and density. Fluid dynamics has several subdisciplines itself, including aerodynamics and hydrodynamics. Some fluid-dynamical principles are used in traffic engineering and crowd dynamics.
Fluid mechanics is a subdiscipline of continuum mechanics, as illustrated in the following table. In a mechanical view, a fluid is a substance that does not support shear stress; that is why a fluid at rest has the shape of its containing vessel. A fluid at rest has no shear stress. The assumptions inherent to a fluid mechanical treatment of a physical system can be expressed in terms of mathematical equations; for example, the assumption that mass is conserved can be expressed as an equation in integral form over a control volume. The continuum assumption is an idealization of continuum mechanics under which fluids can be treated as continuous, even though, on a microscopic scale, they are composed of molecules. Fluid properties can vary continuously from one volume element to another and are average values of the molecular properties. The continuum hypothesis can lead to inaccurate results in applications like supersonic flows or molecular flows on the nano scale. Those problems for which the continuum hypothesis fails can be solved using statistical mechanics. To determine whether or not the continuum hypothesis applies, the Knudsen number, defined as the ratio of the molecular mean free path to the characteristic length scale, is evaluated. Problems with Knudsen numbers below 0.1 can be evaluated using the continuum hypothesis. The Navier–Stokes equations are differential equations that describe the force balance at a given point within a fluid.
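The Knudsen-number test described above is straightforward to express in code. In this sketch, the 0.1 threshold is the one quoted in the text; the mean free path of roughly 68 nm for air at sea level is a commonly quoted figure used here purely as illustration.

```python
def knudsen_number(mean_free_path, length_scale):
    # Kn = lambda / L: molecular mean free path over characteristic length scale.
    return mean_free_path / length_scale

def continuum_applies(mean_free_path, length_scale, threshold=0.1):
    """True when the continuum hypothesis is a reasonable approximation (Kn < 0.1)."""
    return knudsen_number(mean_free_path, length_scale) < threshold
```

Air (lambda of about 68e-9 m) flowing past a 1 m object gives a vanishingly small Kn, so the continuum treatment holds; the same gas in a 100 nm channel gives Kn near 0.7, where it does not.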
Fluid mechanics
–
Balance for some integrated fluid quantity in a control volume enclosed by a control surface.
17.
Continuum mechanics
–
Continuum mechanics is a branch of mechanics that deals with the analysis of the kinematics and the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles. The French mathematician Augustin-Louis Cauchy was the first to formulate such models in the 19th century, and research in the area continues to this day. Modeling an object as a continuum assumes that the substance of the object completely fills the space it occupies. Continuum mechanics deals with physical properties of solids and fluids which are independent of any particular coordinate system in which they are observed. These physical properties are represented by tensors, which are mathematical objects that have the required property of being independent of coordinate system. These tensors can be expressed in coordinate systems for computational convenience. Materials, such as solids, liquids and gases, are composed of molecules separated by space, and on a microscopic scale materials have cracks and discontinuities. A continuum, by contrast, is a body that can be continually sub-divided into infinitesimal elements with properties being those of the bulk material. More specifically, the continuum hypothesis hinges on the concepts of a representative elementary volume and separation of scales. This condition provides a link between an experimentalist's and a theoretician's viewpoint on constitutive equations, as well as a way of spatial and statistical averaging of the microstructure. The latter then provide a basis for stochastic finite elements. The levels of the statistical volume element (SVE) and the representative volume element (RVE) link continuum mechanics to statistical mechanics. The RVE may be assessed only in a limited way via experimental testing: when the constitutive response becomes spatially homogeneous. Specifically for fluids, the Knudsen number is used to assess to what extent the approximation of continuity can be made. As an illustration, consider car traffic on a highway, with just one lane for simplicity.
Somewhat surprisingly, and in a tribute to its effectiveness, continuum mechanics effectively models the movement of cars via a differential equation for the density of cars. The familiarity of this situation empowers us to understand a little of the continuum-discrete dichotomy underlying continuum modelling in general. To start modelling, define: x measures distance along the highway, t is time, ρ(x, t) is the density of cars on the highway, and u(x, t) is their velocity; cars do not appear and disappear. Consider any group of cars, from the car at the back of the group located at x = a(t) to the car at the front located at x = b(t). The total number of cars in this group is N = ∫_a^b ρ dx. Since cars are conserved, dN/dt = 0. Because the endpoints a(t) and b(t) move with the cars, differentiating N gives ∫_a^b (∂ρ/∂t + ∂(ρu)/∂x) dx = 0, and this holds for every group, that is, for every interval [a, b]. The only way an integral can be zero for all intervals is if the integrand is zero for all x. Consequently, conservation derives the first-order nonlinear conservation PDE ∂ρ/∂t + ∂(ρu)/∂x = 0 for all positions on the highway. This conservation PDE applies not only to car traffic but also to fluids, solids, crowds, animals, plants, bushfires, financial traders, and more. This PDE is one equation with two unknowns, so another equation is needed to form a well-posed problem.
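The conservation PDE above can be integrated numerically. The sketch below uses a first-order upwind scheme and, purely for simplicity, assumes all cars move at one constant speed u on a periodic stretch of road (so the flux is ρu with u fixed); these simplifications are mine, not part of the traffic model above.

```python
def step_density(rho, u, dx, dt):
    """One first-order upwind step of d(rho)/dt + u * d(rho)/dx = 0,
    assuming constant speed u > 0 and periodic boundaries."""
    c = u * dt / dx                    # Courant number; stable for c <= 1
    n = len(rho)
    # Upwind difference: each cell looks at its left neighbour (rho[i-1]);
    # index -1 wraps around, giving the periodic boundary.
    return [rho[i] - c * (rho[i] - rho[i - 1]) for i in range(n)]
```

Note that the scheme conserves the total number of cars exactly (the sum of the cell densities is unchanged by a step), mirroring dN/dt = 0 in the derivation above.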
Continuum mechanics
–
Figure 1. Configuration of a continuum body
18.
Covariance and contravariance of vectors
–
In multilinear algebra and tensor analysis, covariance and contravariance describe how the quantitative description of certain geometric or physical entities changes with a change of basis. In physics, a basis is sometimes thought of as a set of reference axes; a change of scale on the reference axes corresponds to a change of units in the problem. For instance, in changing scale from meters to centimeters, the components of a velocity vector will multiply by 100. Vectors exhibit this behavior of changing scale inversely to changes in the scale of the reference axes. As a result, vectors often have units of distance or distance times some other unit. In contrast, dual vectors typically have units of the inverse of distance or the inverse of distance times some other unit; an example of a dual vector is the gradient, which has units of a spatial derivative, or distance−1. The components of dual vectors change in the same way as changes to the scale of the reference axes. For a vector to be basis-independent, its components must contra-vary with a change of basis to compensate; that is, the matrix that transforms the vector of components must be the inverse of the matrix that transforms the basis vectors. The components of vectors are said to be contravariant. In Einstein notation, contravariant components are denoted with upper indices, as in v = v^i e_i. For a dual vector to be basis-independent, the components of the dual vector must co-vary with a change of basis to remain representing the same covector; that is, the components must be transformed by the same matrix as the change of basis matrix. The components of dual vectors are said to be covariant. Examples of covariant vectors generally appear when taking a gradient of a function. In Einstein notation, covariant components are denoted with lower indices, as in v = v_i e^i. Curvilinear coordinate systems, such as cylindrical or spherical coordinates, are often used in physical and geometric problems.
Tensors are objects in multilinear algebra that can have aspects of both covariance and contravariance. In physics, a vector typically arises as the outcome of a measurement or series of measurements, and is represented as a list of numbers such as (v_1, v_2, v_3). The numbers in the list depend on the choice of coordinate system. For a vector to represent a geometric object, it must be possible to describe how it looks in any other coordinate system; that is to say, the components of the vector will transform in a certain way in passing from one coordinate system to another. A contravariant vector has components that transform as the coordinates do under changes of coordinates, including rotation and dilation. The vector itself does not change under these operations; instead, the components of the vector change in a way that cancels the change in the spatial axes. In other words, if the reference axes were rotated in one direction, the component representation of the vector would rotate in exactly the opposite way. Similarly, if the reference axes were stretched in one direction, the components of the vector would shrink in an exactly compensating way. This important requirement is what distinguishes a contravariant vector from any other triple of physically meaningful quantities.
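The meters-to-centimeters example can be made concrete. In the sketch below (the helper name is my own), each new basis vector is the old one times a per-axis scale factor: 0.01 when the new unit, a centimeter, is one hundredth of a meter; the contravariant components then transform by the inverse factor.

```python
def transform_components(components, scale_factors):
    """Contravariant transformation under a per-axis rescaling of the basis:
    if the new basis vector is e_i' = s_i * e_i, the components become v_i / s_i."""
    return [v / s for v, s in zip(components, scale_factors)]
```

A speed of 1.5 along one axis in meters becomes 150 in centimeters: the basis vector shrank by 100, so the component grew by 100, leaving the vector itself unchanged.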
Covariance and contravariance of vectors
–
Tangent basis vectors (yellow, left: e1, e2, e3) to the coordinate curves (black).
19.
Del
–
Del, or nabla, is an operator used in mathematics, in particular in vector calculus, as a vector differential operator, usually represented by the nabla symbol ∇. When applied to a function defined on a one-dimensional domain, it denotes the standard derivative of the function as defined in calculus. When applied to a field, del may denote the gradient of a scalar field, the divergence of a vector field, or the curl of a vector field, depending on the way it is applied. Strictly speaking, del is not a specific operator, but rather a convenient mathematical notation for those three operators that makes many equations easier to write and remember. These formal products do not necessarily commute with other operators or products. Del is used as a shorthand form to simplify many long mathematical expressions; it is most commonly used to simplify expressions for the gradient, divergence, curl, directional derivative, and Laplacian. In particular, if a hill is defined as a height function h(x, y) over a plane, the gradient of h at a point is a vector pointing in the direction of the steepest slope at that point, and the magnitude of the gradient is the value of this steepest slope. When del operates on a vector it must be distributed to each component. The Laplacian is ubiquitous throughout modern mathematical physics, appearing for example in Laplace's equation, Poisson's equation, the heat equation, and the wave equation. Del can also be applied to a vector field with the result being a tensor. The tensor derivative of a vector field v is a 9-term second-rank tensor (that is, a 3×3 matrix), but can be denoted simply as ∇ ⊗ v, where ⊗ represents the dyadic product. This quantity is equivalent to the transpose of the Jacobian matrix of the vector field with respect to space. The divergence of the vector field can then be expressed as the trace of this matrix. Because of the diversity of vector products, one application of del already gives rise to three major derivatives: the gradient, divergence, and curl. This is part of the value to be gained in notationally representing this operator as a vector: one can often replace del with a vector and obtain a vector identity, making those identities mnemonic.
Whereas a vector is an object with both a magnitude and a direction, del has neither a magnitude nor a direction until it operates on a function. For that reason, identities involving del must be derived with care.
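As a minimal numerical illustration of the gradient, ∇f can be approximated by central differences (a sketch of my own, not tied to any particular library):

```python
def numerical_gradient(f, point, h=1e-6):
    """Approximate the gradient of a scalar field f at `point`
    by central differences along each coordinate axis."""
    grad = []
    for i in range(len(point)):
        forward = list(point); forward[i] += h
        backward = list(point); backward[i] -= h
        grad.append((f(forward) - f(backward)) / (2 * h))
    return grad
```

For the "hill" h(x, y) = x² + 3y, the gradient at (1, 2) comes out close to the exact value (2, 3), pointing in the direction of steepest ascent.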
Del
–
DCG chart: A simple chart depicting all rules pertaining to second derivatives. D, C, G, L and CC stand for divergence, curl, gradient, Laplacian and curl of curl, respectively. Arrows indicate existence of second derivatives. Blue circle in the middle represents curl of curl, whereas the other two red circles (dashed) mean that DD and GG do not exist.
20.
Real number
–
In mathematics, a real number is a value that represents a quantity along a continuous line. The adjective real in this context was introduced in the 17th century by René Descartes. The real numbers include all the rational numbers, such as the integer −5 and the fraction 4/3, and all the irrational numbers, such as √2. Included within the irrationals are the transcendental numbers, such as π. Real numbers can be thought of as points on an infinitely long line called the number line or real line. Any real number can be determined by a possibly infinite decimal representation, such as that of 8.632. The real line can be thought of as a part of the complex plane, and the complex numbers include the real numbers. These descriptions of the real numbers are not sufficiently rigorous by the modern standards of pure mathematics. All the modern definitions satisfy the axiomatic definition and are thus equivalent. The statement that there is no subset of the reals with cardinality strictly greater than ℵ0 and strictly smaller than that of the reals is known as the continuum hypothesis. Simple fractions were used by the Egyptians around 1000 BC, and the Vedic Sulba Sutras (c. 600 BC) contain early treatments of irrational quantities. Around 500 BC, the Greek mathematicians led by Pythagoras realized the need for irrational numbers, in particular the irrationality of the square root of 2. Arabic mathematicians merged the concepts of number and magnitude into a more general idea of real numbers. In the 16th century, Simon Stevin created the basis for modern decimal notation, and in the 17th century, Descartes introduced the term real to describe roots of a polynomial, distinguishing them from imaginary ones. In the 18th and 19th centuries, there was much work on irrational and transcendental numbers. Johann Heinrich Lambert gave the first flawed proof that π cannot be rational, and Adrien-Marie Legendre completed the proof. Évariste Galois developed techniques for determining whether a given equation could be solved by radicals, which gave rise to the field of Galois theory.
Charles Hermite first proved that e is transcendental, and Ferdinand von Lindemann then showed that π is transcendental. Lindemann's proof was much simplified by Weierstrass, still further by David Hilbert, and has finally been made elementary by Adolf Hurwitz and Paul Gordan. The development of calculus in the 18th century used the entire set of real numbers without having defined them rigorously. The first rigorous definition was given by Georg Cantor in 1871. In 1874, he showed that the set of all real numbers is uncountably infinite but the set of all algebraic numbers is countably infinite; contrary to widely held beliefs, his first method was not his famous diagonal argument, which he published later, in 1891. The real number system can be defined axiomatically up to an isomorphism, which is described hereafter. Another possibility is to start from some rigorous axiomatization of Euclidean geometry. From the structuralist point of view, all these constructions are on equal footing.
Real number
–
A symbol of the set of real numbers (ℝ)
21.
Set (mathematics)
–
In mathematics, a set is a well-defined collection of distinct objects, considered as an object in its own right. For example, the numbers 2, 4, and 6 are distinct objects when considered separately, but when they are considered collectively they form a single set of size three, written {2, 4, 6}. Sets are one of the most fundamental concepts in mathematics. Developed at the end of the 19th century, set theory is now a ubiquitous part of mathematics. In mathematics education, elementary topics such as Venn diagrams are taught at a young age. The German word Menge, rendered as set in English, was coined by Bernard Bolzano in his work The Paradoxes of the Infinite. A set is a collection of distinct objects. The objects that make up a set can be anything: numbers, people, letters of the alphabet, other sets, and so on. Sets are conventionally denoted with capital letters. Sets A and B are equal if and only if they have precisely the same elements. Cantor's definition turned out to be inadequate; instead, the notion of a set is taken as a primitive notion in axiomatic set theory. There are two ways of describing, or specifying the members of, a set. One way is by intensional definition, using a rule or semantic description: A is the set whose members are the first four positive integers; B is the set of colors of the French flag. The second way is by extension, that is, listing each member of the set. An extensional definition is denoted by enclosing the list of members in curly brackets: C = {4, 2, 1, 3} and D = {blue, white, red}. One often has the choice of specifying a set either intensionally or extensionally; in the examples above, for instance, A = C and B = D. There are two important points to note about sets. First, in an extensional definition, a set member can be listed two or more times, for example, {11, 6, 6}. However, per extensionality, two definitions of sets which differ only in that one of the definitions lists set members multiple times define, in fact, the same set. Hence, the set {11, 6, 6} is identical to the set {11, 6}.
The second important point is that the order in which the elements of a set are listed is irrelevant. We can illustrate these two important points with an example: {6, 11} = {11, 6} = {11, 6, 6, 11}. For sets with many elements, the enumeration of members can be abbreviated; for instance, the set of the first thousand positive integers may be specified extensionally as {1, 2, 3, …, 1000}, where the ellipsis indicates that the list continues in the obvious way. Ellipses may also be used where sets have infinitely many members; thus the set of positive even numbers can be written as {2, 4, 6, 8, …}.
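The two points above, that repetition and order of listing are both irrelevant, are directly visible in any language with a built-in set type; in Python, for example:

```python
A = {4, 2, 1, 3}                 # extensional definition
B = {n for n in range(1, 5)}     # intensional: the first four positive integers
assert A == B                    # same elements, so the same set

assert {11, 6, 6} == {11, 6}     # listing a member twice changes nothing
assert {6, 11} == {11, 6}        # order of listing is irrelevant
```

This also illustrates that a set's size counts distinct members only: len({11, 6, 6}) is 2, not 3.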
Set (mathematics)
–
A set of polygons in a Venn diagram
22.
Cartesian product
–
In set theory, a Cartesian product is a mathematical operation that returns a set from multiple sets. That is, for sets A and B, the Cartesian product A × B is the set of all ordered pairs (a, b) where a ∈ A and b ∈ B. Products can be specified using set-builder notation, e.g. A × B = {(a, b) | a ∈ A and b ∈ B}. A table can be created by taking the Cartesian product of a set of rows and a set of columns: if the Cartesian product rows × columns is taken, the cells of the table contain ordered pairs of the form (row value, column value). More generally, a Cartesian product of n sets, also known as an n-fold Cartesian product, can be represented by an array of n dimensions; an ordered pair is a 2-tuple or couple. The Cartesian product is named after René Descartes, whose formulation of analytic geometry gave rise to the concept. An illustrative example is the standard 52-card deck. The standard playing card ranks form a 13-element set; the card suits form a four-element set. The Cartesian product of these sets returns a 52-element set consisting of 52 ordered pairs. Ranks × Suits returns a set of the form {(A, ♠), (A, ♥), …, (2, ♣)}, while Suits × Ranks returns a set of the form {(♠, A), (♥, A), …, (♣, 2)}; both sets are distinct, even disjoint. The main historical example is the Cartesian plane in analytic geometry; usually, such a pair's first and second components are called its x and y coordinates, respectively, cf. picture. The set of all such pairs is thus assigned to the set of all points in the plane. A formal definition of the Cartesian product from set-theoretical principles follows from a definition of ordered pair. The most common definition of ordered pairs, the Kuratowski definition, is (a, b) = {{a}, {a, b}}. Note that, under this definition, X × Y ⊆ P(P(X ∪ Y)), where P(S) represents the power set of S. Therefore, the existence of the Cartesian product of any two sets in ZFC follows from the axioms of pairing, union, and power set. Let A, B, C, and D be sets. The Cartesian product is not associative: (A × B) × C ≠ A × (B × C). If, for example, A = {1}, then (A × A) × A = {((1, 1), 1)} ≠ {(1, (1, 1))} = A × (A × A). The Cartesian product behaves nicely with respect to intersections, cf. left picture.
(A ∩ B) × (C ∩ D) = (A × C) ∩ (B × D). In most cases the above statement is not true if we replace intersection with union, cf. middle picture. Other properties related to subsets are: if A ⊆ B then A × C ⊆ B × C. The cardinality of a set is the number of elements of the set. For example, define two sets A = {a, b} and B = {5, 6}; both set A and set B consist of two elements each. Their Cartesian product, written as A × B, results in a new set which has the following elements: A × B = {(a, 5), (a, 6), (b, 5), (b, 6)}. Each element of A is paired with each element of B, and each pair makes up one element of the output set. The number of values in each element of the resulting set is equal to the number of sets whose Cartesian product is being taken, 2 in this case.
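In Python, the standard library's itertools.product computes exactly this n-fold Cartesian product; the miniature deck below (three ranks and two suits, my own choice for brevity) mirrors the 52-card example:

```python
from itertools import product

ranks = ['A', 'K', 'Q']              # 3 ranks instead of the full 13, for brevity
suits = ['spades', 'hearts']
deck = list(product(ranks, suits))   # ordered pairs (rank, suit)

assert len(deck) == len(ranks) * len(suits)   # |A x B| = |A| * |B|
assert ('A', 'spades') in deck
```

Swapping the argument order gives product(suits, ranks), whose pairs (suit, rank) form a distinct, disjoint set, just as Suits × Ranks differs from Ranks × Suits above.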
Cartesian product
–
Standard 52-card deck
Cartesian product
–
Cartesian product of two sets
23.
Vector space
–
A vector space is a collection of objects called vectors, which may be added together and multiplied by numbers, called scalars in this context. Scalars are often taken to be real numbers, but there are also vector spaces with scalar multiplication by complex numbers, rational numbers, or generally any field. The operations of vector addition and scalar multiplication must satisfy certain requirements, called axioms. Euclidean vectors are an example of a vector space; they represent physical quantities such as forces: any two forces can be added to yield a third, and the multiplication of a force vector by a real multiplier is another force vector. In the same vein, but in a more geometric sense, vectors representing displacements in the plane or in three-dimensional space also form vector spaces. Vector spaces are the subject of linear algebra and are well characterized by their dimension, which, roughly speaking, specifies the number of independent directions in the space. Infinite-dimensional vector spaces arise naturally in mathematical analysis, as function spaces. These vector spaces are generally endowed with additional structure, which may be a topology, allowing the consideration of issues of proximity and continuity. Among these topologies, those that are defined by a norm or inner product are commonly used. This is particularly the case of Banach spaces and Hilbert spaces. Historically, the first ideas leading to vector spaces can be traced back as far as the 17th century's analytic geometry, matrices, systems of linear equations, and Euclidean vectors. Today, vector spaces are applied throughout mathematics, science and engineering; furthermore, vector spaces furnish an abstract, coordinate-free way of dealing with geometrical and physical objects such as tensors. This in turn allows the examination of local properties of manifolds by linearization techniques. Vector spaces may be generalized in several ways, leading to more advanced notions in geometry and abstract algebra.
The concept of vector space will first be explained by describing two particular examples. The first example of a vector space consists of arrows in a fixed plane, starting at one fixed point; this is used in physics to describe forces or velocities. Given any two such arrows, v and w, the parallelogram spanned by these two arrows contains one diagonal arrow that starts at the origin, too. This new arrow is called the sum of the two arrows and is denoted v + w. Another operation, scalar multiplication, takes an arrow v and a positive number a and yields the arrow av, which has the same direction as v but is dilated or shrunk by the factor a; when a is negative, av is defined as the arrow pointing in the opposite direction instead. The second example consists of pairs of real numbers x and y. Such a pair is written as (x, y); the sum of two such pairs and multiplication of a pair with a number are defined as follows: (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2) and a(x, y) = (ax, ay). The first example above reduces to this one if the arrows are represented by the pair of Cartesian coordinates of their end points. A vector space over a field F is a set V together with two operations that satisfy the eight axioms listed below. Elements of V are commonly called vectors; elements of F are commonly called scalars. The first operation, vector addition, takes any two vectors v and w and gives a third vector v + w. The second operation, called scalar multiplication, takes any scalar a and any vector v and gives another vector av. In this article, vectors are represented in boldface to distinguish them from scalars.
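The second example, pairs of numbers with componentwise operations, is a one-liner in code; the sketch below (generalized to n-tuples) also spot-checks two of the eight axioms, commutativity of addition and distributivity of scalar multiplication:

```python
def vadd(v, w):
    # Vector addition: componentwise sum of two tuples.
    return tuple(a + b for a, b in zip(v, w))

def smul(a, v):
    # Scalar multiplication: scale every component by a.
    return tuple(a * c for c in v)

v, w = (1, 2), (3, -1)
assert vadd(v, w) == vadd(w, v)                             # v + w = w + v
assert smul(2, vadd(v, w)) == vadd(smul(2, v), smul(2, w))  # a(v + w) = av + aw
```

A full verification of a vector space structure would check all eight axioms; the two shown here are just the pattern.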
Vector space
–
Vector addition and scalar multiplication: a vector v (blue) is added to another vector w (red, upper illustration). Below, w is stretched by a factor of 2, yielding the sum v + 2 w.
24.
Coordinates
–
The order of the coordinates is significant, and they are sometimes identified by their position in an ordered tuple and sometimes by a letter, as in "the x-coordinate". The coordinates are taken to be real numbers in elementary mathematics. The use of a coordinate system allows problems in geometry to be translated into problems about numbers and vice versa; this is the basis of analytic geometry. The simplest example of a coordinate system is the identification of points on a line with real numbers using the number line. In this system, an arbitrary point O (the origin) is chosen on a given line. The coordinate of a point P is defined as the signed distance from O to P. Each point is given a unique coordinate and each real number is the coordinate of a unique point. The prototypical example of a coordinate system is the Cartesian coordinate system. In the plane, two perpendicular lines are chosen and the coordinates of a point are taken to be the signed distances to the lines. In three dimensions, three mutually perpendicular planes are chosen and the three coordinates of a point are the signed distances to each of the planes. This can be generalized to create n coordinates for any point in n-dimensional Euclidean space. Depending on the direction and order of the coordinate axes, the system may be a right-hand or a left-hand system. This is one of many coordinate systems; another common coordinate system for the plane is the polar coordinate system. A point is chosen as the pole and a ray from this point is taken as the polar axis. For a given angle θ, there is a single line through the pole whose angle with the polar axis is θ, and there is a single point on this line whose signed distance from the origin is r for a given number r. For a given pair of coordinates (r, θ) there is a single point, but any point is represented by many pairs of coordinates; for example, adding any multiple of 360° to θ gives polar coordinates for the same point. The pole is represented by (0, θ) for any value of θ. There are two common methods for extending the polar coordinate system to three dimensions.
In the cylindrical coordinate system, a z-coordinate with the same meaning as in Cartesian coordinates is added to the r and θ polar coordinates, giving a triple (r, θ, z). Spherical coordinates take this a step further by converting the pair of cylindrical coordinates (r, z) to polar coordinates (ρ, φ), giving a triple (ρ, θ, φ). A point in the plane may be represented in homogeneous coordinates by a triple (x, y, z) where x/z and y/z are the Cartesian coordinates of the point
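The cylindrical and spherical triples above can be computed from Cartesian coordinates as a short sketch; the conventions assumed here (polar angle measured from the z-axis, azimuth via atan2) match the physics usage in the caption below, and the function names are ours.

```python
# Converting a Cartesian point to cylindrical (r, theta, z) and
# spherical (rho, theta, phi) coordinates.
import math

def to_cylindrical(x, y, z):
    r = math.hypot(x, y)          # distance from the z-axis
    theta = math.atan2(y, x)      # azimuthal angle in the xy-plane
    return r, theta, z

def to_spherical(x, y, z):
    rho = math.sqrt(x * x + y * y + z * z)   # radial distance
    theta = math.acos(z / rho)               # polar angle from the z-axis
    phi = math.atan2(y, x)                   # azimuthal angle
    return rho, theta, phi

r, theta, z = to_cylindrical(1.0, 1.0, 1.0)
rho, polar, phi = to_spherical(1.0, 1.0, 1.0)
assert math.isclose(r, math.sqrt(2)) and math.isclose(theta, math.pi / 4)
assert math.isclose(rho, math.sqrt(3))
```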
Coordinates
–
The spherical coordinate system is commonly used in physics. It assigns three numbers (known as coordinates) to every point in Euclidean space: radial distance r, polar angle θ (theta), and azimuthal angle φ (phi). The symbol ρ (rho) is often used instead of r.
25.
Orthogonal vector
–
The concept of orthogonality has been broadly generalized in mathematics, as well as in areas such as chemistry and engineering. The word comes from the Greek ὀρθός, meaning "upright", and γωνία, meaning "angle". The ancient Greek ὀρθογώνιον orthogōnion and classical Latin orthogonium originally denoted a rectangle; later, they came to mean a right triangle. In the 12th century, the post-classical Latin word orthogonalis came to mean a right angle or something related to a right angle. In geometry, two Euclidean vectors are orthogonal if they are perpendicular, i.e. they form a right angle. Two vectors, x and y, in an inner product space, V, are orthogonal if their inner product ⟨x, y⟩ is zero; this relationship is denoted x ⊥ y. Two vector subspaces, A and B, of an inner product space, V, are called orthogonal subspaces if each vector in A is orthogonal to each vector in B. The largest subspace of V that is orthogonal to a given subspace is its orthogonal complement. Given a module M and its dual M∗, an element m′ of M∗ and an element m of M are orthogonal if their natural pairing ⟨m′, m⟩ is zero; two sets S′ ⊆ M∗ and S ⊆ M are orthogonal if each element of S′ is orthogonal to each element of S. A term rewriting system is said to be orthogonal if it is left-linear and non-ambiguous; orthogonal term rewriting systems are confluent. A set of vectors in an inner product space is called pairwise orthogonal if each pairing of them is orthogonal; such a set is called an orthogonal set. Nonzero pairwise orthogonal vectors are always linearly independent. In certain cases, the word normal is used to mean orthogonal. For example, the y-axis is normal to the curve y = x2 at the origin. However, normal may also refer to the magnitude of a vector; in particular, a set is called orthonormal if it is an orthogonal set of unit vectors. As a result, use of the word normal to mean orthogonal is often avoided. The word normal also has a different meaning in probability and statistics. A vector space with a bilinear form generalizes the case of an inner product.
When the bilinear form applied to two vectors results in zero, then they are orthogonal. The case of a pseudo-Euclidean plane uses the term hyperbolic orthogonality; in the diagram, axes x′ and t′ are hyperbolic-orthogonal for any given ϕ. In 2-D or higher-dimensional Euclidean space, two vectors are orthogonal if and only if their dot product is zero, i.e. they make an angle of 90°; hence orthogonality of vectors is an extension of the concept of perpendicular vectors into higher-dimensional spaces
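The inner-product test for orthogonality described above can be sketched briefly; this is an illustrative example with helper names of our choosing, using the standard inner product on R^n.

```python
# Orthogonality via the standard inner product: x ⊥ y iff <x, y> = 0.

def inner(x, y):
    """Standard inner (dot) product on R^n."""
    return sum(a * b for a, b in zip(x, y))

def is_orthogonal(x, y, tol=1e-12):
    return abs(inner(x, y)) < tol

x = (1.0, 2.0, 0.0)
y = (-2.0, 1.0, 5.0)
assert is_orthogonal(x, y)        # <x, y> = -2 + 2 + 0 = 0

# The standard basis vectors are pairwise orthogonal unit vectors,
# i.e. an orthonormal set; such nonzero vectors are linearly independent.
e1, e2 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
assert is_orthogonal(e1, e2)
```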
Orthogonal vector
–
The line segments AB and CD are orthogonal to each other.
26.
Scalar multiplication
–
In mathematics, scalar multiplication is one of the basic operations defining a vector space in linear algebra. In common geometrical contexts, scalar multiplication of a real Euclidean vector by a positive real number multiplies the magnitude of the vector without changing its direction. The term scalar itself derives from this usage: a scalar is that which scales vectors. Scalar multiplication is the multiplication of a vector by a scalar, and must be distinguished from the inner product of two vectors. In general, if K is a field and V is a vector space over K, then scalar multiplication is a function from K × V to V. The result of applying this function to c in K and v in V is denoted cv. Here + is addition either in the field or in the vector space, as appropriate, and 0 is the additive identity in either; juxtaposition indicates either scalar multiplication or the multiplication operation in the field. Scalar multiplication may be viewed as an external binary operation or as an action of the field on the vector space. A geometric interpretation of scalar multiplication is that it stretches or contracts vectors by a constant factor. As a special case, V may be taken to be K itself, and scalar multiplication may then be taken to be simply the multiplication in the field. When V is Kn, scalar multiplication is equivalent to multiplication of each component with the scalar. The same idea applies if K is a commutative ring and V is a module over K; K can even be a rig, but then there is no additive inverse. If K is not commutative, the distinct operations of left scalar multiplication cv and right scalar multiplication vc may be defined. The left scalar multiplication of a matrix A with a scalar λ gives another matrix λA of the same size as A; the entries of λA are defined by (λA)ij = λ(A)ij. Similarly, the right scalar multiplication of a matrix A with a scalar λ is defined by (Aλ)ij = (A)ij λ. When the underlying ring is commutative, for example the real or complex number field, the two multiplications are the same. However, for matrices over a more general ring that is not commutative, such as the quaternions, they may not be equal.
For a real scalar λ = 2 and a real matrix A, left and right scalar multiplication agree: 2A = A2. For quaternion scalars and matrices they can differ; for example, with λ = i, left and right multiplication of a quaternion matrix give different results, because the non-commutativity of quaternion multiplication prevents changing ij = +k to ji = −k
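The entrywise rule (λA)ij = λ(A)ij can be sketched as follows; the representation of a matrix as nested lists and the function name are our own choices, and over a commutative ring such as the reals the left and right products coincide, as the text notes.

```python
# Left scalar multiplication of a matrix, entrywise: (λA)_ij = λ * A_ij.

def scalar_mul(lam, A):
    return [[lam * entry for entry in row] for row in A]

A = [[1, 2],
     [3, 4]]

assert scalar_mul(2, A) == [[2, 4], [6, 8]]

# For real scalars, multiplying each entry on the right gives the same matrix,
# because real multiplication is commutative.
assert scalar_mul(2, A) == [[entry * 2 for entry in row] for row in A]
```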
Scalar multiplication
–
Scalar multiplication of a vector by a factor of 3 stretches the vector out.
27.
Atlas (topology)
–
In mathematics, particularly topology, one describes a manifold using an atlas. An atlas consists of individual charts that, roughly speaking, describe individual regions of the manifold; if the manifold is the surface of the Earth, then an atlas has its more common meaning. In general, the notion of atlas underlies the formal definition of a manifold and related structures such as vector bundles. The definition of an atlas depends on the notion of a chart. A chart for a topological space M is a homeomorphism φ from an open subset U of M to an open subset of a Euclidean space; the chart is traditionally recorded as the ordered pair (U, φ). An atlas for a topological space M is a collection {(Uα, φα)} of charts on M such that ⋃ Uα = M. If the codomain of each chart is the n-dimensional Euclidean space, then M is said to be an n-dimensional manifold. A transition map provides a way of comparing two charts of an atlas. To make this comparison, we consider the composition of one chart with the inverse of the other; this composition is not well-defined unless we restrict both charts to the intersection of their domains of definition. To be more precise, suppose that (Uα, φα) and (Uβ, φβ) are two charts for a manifold M such that Uα ∩ Uβ is non-empty. The transition map τα,β : φα(Uα ∩ Uβ) → φβ(Uα ∩ Uβ) is the map defined by τα,β = φβ ∘ φα−1. Note that since φα and φβ are both homeomorphisms, the transition map τα,β is also a homeomorphism. One often desires more structure on a manifold than simply the topological structure. For example, if one would like an unambiguous notion of differentiation of functions on a manifold, it is necessary to construct an atlas whose transition functions are differentiable; such a manifold is called differentiable. Given a differentiable manifold, one can define the notion of tangent vectors. If each transition function is a smooth map, then the atlas is called a smooth atlas. Alternatively, one could require that the transition maps have only k continuous derivatives, in which case the atlas is said to be Ck.
Very generally, if each transition function belongs to a pseudo-group G of homeomorphisms of Euclidean space, then the atlas is called a G-atlas. If the transition maps between charts of an atlas preserve a local trivialization, then the atlas defines the structure of a fibre bundle.
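The transition map τα,β = φβ ∘ φα−1 can be made concrete on the unit circle; this is an illustrative construction of our own (the charts and their domains are assumptions, not from the article): one angle chart omitting the point (−1, 0) and another omitting (1, 0), with the transition computed on their overlap.

```python
# Two angle charts covering the unit circle S^1, and the transition map
# between them on the overlap of their domains.
import math

def phi_a(p):
    """Chart on U_a = S^1 minus {(-1, 0)}: angle in (-pi, pi)."""
    x, y = p
    return math.atan2(y, x)

def phi_b(p):
    """Chart on U_b = S^1 minus {(1, 0)}: angle in (0, 2*pi)."""
    x, y = p
    t = math.atan2(y, x)
    return t if t > 0 else t + 2 * math.pi

def phi_a_inverse(t):
    return (math.cos(t), math.sin(t))

def transition(t):
    """tau_{a,b} = phi_b o phi_a^{-1}, defined on phi_a(U_a ∩ U_b)."""
    return phi_b(phi_a_inverse(t))

# On the upper half of the overlap the transition map is the identity;
# on the lower half it shifts the angle by 2*pi. Both pieces are homeomorphisms.
assert math.isclose(transition(math.pi / 2), math.pi / 2)
assert math.isclose(transition(-math.pi / 2), 3 * math.pi / 2)
```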
Atlas (topology)
–
Two charts on a manifold
28.
Domain of a function
–
In mathematics, and more specifically in naive set theory, the domain of definition of a function is the set of input or argument values for which the function is defined; that is, the function provides an output or value for each member of the domain. Conversely, the set of values the function takes on as output is termed the image of the function, which is sometimes also referred to as the range of the function. For instance, the domain of cosine is the set of all real numbers. If the domain of a function is a subset of the real numbers and the function is represented in a Cartesian coordinate system, then the domain is represented on the x-axis. Given a function f : X → Y, the set X is the domain of f; in the expression f(x), x is the argument and f(x) is the value. One can think of an argument as a member of the domain that is chosen as an input to the function. The image of f is the set of all values assumed by f for all possible x; this is the set {f(x) : x ∈ X}. The image of f can be the same set as the codomain or it can be a proper subset of it. It is, in general, smaller than the codomain; it is the whole codomain if and only if f is surjective. A well-defined function must map every element of its domain to an element of its codomain. For example, the function f defined by f(x) = 1/x has no value for f(0); thus, the set of all real numbers, R, cannot be its domain. In cases like this, the function is either defined on R \ {0} or the gap is plugged by explicitly defining f(0). If we extend the definition of f to f(x) = 1/x for x ≠ 0 and f(0) = 0, then f is defined for all real numbers. Any function can be restricted to a subset of its domain; the restriction of g : A → B to S, where S ⊆ A, is written g|S : S → B. The natural domain of a function is the set of values for which the function is defined, typically within the reals.
For instance, the natural domain of the square root is the non-negative reals when considered as a real-number function. When considering a natural domain, the set of possible values of the function is typically called its range. There are two meanings in current mathematical usage for the notion of the domain of a partial function from X to Y, i.e. a function from a subset X′ of X to Y. Most mathematicians, including recursion theorists, use the term "domain of f" for the set X′ of all values x such that f(x) is defined. But some, particularly category theorists, consider the domain to be X, irrespective of whether f(x) exists for every x in X. In category theory one deals with morphisms instead of functions; morphisms are arrows from one object to another, and the domain of any morphism is the object from which an arrow starts. In this context, many set-theoretic ideas about domains must be abandoned or at least formulated more abstractly; for example, the notion of restricting a morphism to a subset of its domain must be modified
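The 1/x example above, with its gap at 0 plugged by an explicit piecewise definition, can be sketched as follows; the function names are chosen here for illustration.

```python
# The natural domain of f(x) = 1/x excludes 0; the gap can be plugged
# by an explicit piecewise definition, extending the domain to all of R.

def f(x):
    """f(x) = 1/x, defined on R minus {0}."""
    return 1 / x

def f_extended(x):
    """Extension with f(0) = 0, defined for all real numbers."""
    return 1 / x if x != 0 else 0

assert f_extended(0) == 0          # the plugged gap
assert f_extended(2) == f(2) == 0.5  # elsewhere the restriction agrees with f
```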
Domain of a function
–
Illustration showing f, a function from the pink domain X to the blue and yellow codomain Y. The smaller yellow oval inside Y is the image of f. Either the image or the codomain is also sometimes called the range of f.
29.
Jacobian matrix and determinant
–
In vector calculus, the Jacobian matrix is the matrix of all first-order partial derivatives of a vector-valued function. When the matrix is square, both the matrix and its determinant are referred to as the Jacobian in the literature. Suppose f : ℝn → ℝm is a function which takes as input the vector x ∈ ℝn and produces as output the vector f(x) ∈ ℝm. Then the Jacobian matrix J of f is an m×n matrix, usually defined and arranged as follows: J = [∂f/∂x1 ⋯ ∂f/∂xn], or, component-wise, Jij = ∂fi/∂xj. This matrix, whose entries are functions of x, is also denoted by Df, Jf, and ∂(f1, …, fm)/∂(x1, …, xn). The Jacobian matrix represents the best linear approximation of f near the point x; this linear map is thus the generalization of the usual notion of derivative. If m = n, the Jacobian matrix is a square matrix, and its determinant is the Jacobian determinant of f. It carries important information about the local behavior of f. In particular, the function f has, locally in the neighborhood of a point x, an inverse function that is differentiable if the Jacobian determinant is nonzero at x. The Jacobian determinant also appears when changing the variables in multiple integrals. If m = 1, f is a scalar field and the Jacobian matrix is reduced to a row vector of partial derivatives of f, i.e. the gradient of f. These concepts are named after the mathematician Carl Gustav Jacob Jacobi. The Jacobian generalizes the gradient of a scalar-valued function of multiple variables, which itself generalizes the derivative of a scalar-valued function of a single variable. In other words, the Jacobian matrix of a scalar-valued multivariate function is its gradient. The Jacobian can also be thought of as describing the amount of stretching, rotating or transforming that a transformation imposes locally; for example, if (x′, y′) = f(x, y) is used to transform an image, the Jacobian Jf(x, y) describes how the image in the neighborhood of (x, y) is transformed. If p is a point in ℝn and f is differentiable at p, then the linear map Jf(p) is the best linear approximation of f near p, in the sense that f(x) = f(p) + Jf(p)(x − p) + o(‖x − p‖). Compare this to a Taylor series for a scalar function of a scalar argument, truncated to first order: f(x) = f(p) + f′(p)(x − p) + o(x − p). The Jacobian of the gradient of a function of several variables has a special name: the Hessian matrix.
If m = n, then f is a function from ℝn to itself and we can form the determinant of its Jacobian matrix, known as the Jacobian determinant. The Jacobian determinant is occasionally referred to simply as "the Jacobian". The Jacobian determinant at a given point gives important information about the behavior of f near that point. For instance, the continuously differentiable function f is invertible near a point p ∈ ℝn if the Jacobian determinant at p is non-zero; this is the inverse function theorem. Furthermore, if the Jacobian determinant at p is positive, then f preserves orientation near p; if it is negative, f reverses orientation
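A numerical sketch of these ideas (the map f and the forward-difference scheme are our own choices, not from the article): approximate the Jacobian matrix of a map from R² to itself by finite differences, then use the sign of its determinant to check orientation behavior at a point.

```python
# Finite-difference Jacobian of f(x, y) = (x^2*y, 5x + sin y), and the
# sign of its determinant at a point.
import math

def f(v):
    x, y = v
    return [x * x * y, 5 * x + math.sin(y)]

def jacobian(func, p, h=1e-6):
    """Approximate J_ij = df_i/dx_j at p by forward differences."""
    fp = func(p)
    m, n = len(fp), len(p)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        q = list(p)
        q[j] += h
        fq = func(q)
        for i in range(m):
            J[i][j] = (fq[i] - fp[i]) / h
    return J

J = jacobian(f, [1.0, 2.0])
# Exact Jacobian at (1, 2) is [[2xy, x^2], [5, cos y]] = [[4, 1], [5, cos 2]].
assert abs(J[0][0] - 4.0) < 1e-3 and abs(J[0][1] - 1.0) < 1e-3

det = J[0][0] * J[1][1] - J[0][1] * J[1][0]  # 4*cos(2) - 5 < 0
assert det < 0   # f reverses orientation near (1, 2)
```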
Jacobian matrix and determinant
–
A nonlinear map f: R 2 → R 2 sends a small square to a distorted parallelogram close to the image of the square under the best linear approximation of f near the point.
30.
Space
–
Space is the boundless three-dimensional extent in which objects and events have relative position and direction. Physical space is conceived in three linear dimensions, although modern physicists usually consider it, with time, to be part of a boundless four-dimensional continuum known as spacetime. The concept of space is considered to be of fundamental importance to an understanding of the physical universe. However, disagreement continues between philosophers over whether it is itself an entity, a relationship between entities, or part of a conceptual framework. Many of these classical philosophical questions were discussed in the Renaissance and then reformulated in the 17th century. In Isaac Newton's view, space was absolute, in the sense that it existed permanently and independently of whether there was any matter in the space. Other natural philosophers, notably Gottfried Leibniz, thought instead that space was in fact a collection of relations between objects, given by their distance and direction from one another. In the 18th century, the philosopher and theologian George Berkeley attempted to refute the visibility of spatial depth in his Essay Towards a New Theory of Vision. Kant referred to the experience of space in his Critique of Pure Reason as being a pure a priori form of intuition. In the 19th and 20th centuries mathematicians began to examine geometries that are non-Euclidean, in which space is conceived as curved rather than flat. According to Albert Einstein's theory of relativity, space around gravitational fields deviates from Euclidean space. Experimental tests of general relativity have confirmed that non-Euclidean geometries provide a better model for the shape of space. In the seventeenth century, the philosophy of space and time emerged as a central issue in epistemology. At its heart lay a dispute between Gottfried Leibniz, the German philosopher-mathematician, and Isaac Newton over the nature of space. For Leibniz, space was a collection of relations between objects; unoccupied regions are those that could have objects in them, and thus spatial relations with other places.
For Leibniz, then, space was an abstraction from the relations between individual entities or their possible locations and therefore could not be continuous but must be discrete. Space could be thought of in a similar way to the relations between family members: although people in the family are related to one another, the relations do not exist independently of the people. Leibniz argued that space could not exist independently of the objects in it, for that would imply two universes differing only in the location of the material world; but since there would be no observational way of telling these universes apart, then, according to the identity of indiscernibles, there would be no real difference between them. According to the principle of sufficient reason, any theory of space that implied that there could be these two possible universes must therefore be wrong. Newton took space to be more than relations between objects and based his position on observation and experimentation
Space
–
Gottfried Leibniz
Space
–
A right-handed three-dimensional Cartesian coordinate system used to indicate positions in space.
Space
–
Isaac Newton
Space
–
Immanuel Kant
32.
Levi-Civita symbol
–
It is named after the Italian mathematician and physicist Tullio Levi-Civita. Other names include the permutation symbol, antisymmetric symbol, or alternating symbol. The standard letters to denote the Levi-Civita symbol are the Greek lower-case epsilon ε or ϵ, or, less commonly, the Latin lower-case e. Index notation allows one to display permutations in a way compatible with tensor analysis: εi1i2…in, where each index i1, i2, …, in takes values 1, 2, …, n. There are nn indexed values of εi1i2…in, which can be arranged into an n-dimensional array. The key defining property of the symbol is total antisymmetry in all the indices: when any two indices are interchanged, equal or not, the symbol is negated, ε…ip…iq… = −ε…iq…ip…. If any two indices are equal, the symbol is zero. The value ε12…n must be defined, else the particular values of the symbol for all permutations are indeterminate. Most authors choose ε12…n = +1, which means the Levi-Civita symbol equals the sign of a permutation when the indices are all unequal; this choice is used throughout this article. The values of the Levi-Civita symbol are independent of any metric tensor. Also, the specific term "symbol" emphasizes that it is not a tensor because of how it transforms between coordinate systems; however, it can be interpreted as a tensor density. The Levi-Civita symbol allows the determinant of a square matrix, and the cross product of two vectors in three-dimensional Euclidean space, to be expressed in index notation. The three- and higher-dimensional Levi-Civita symbols are used more commonly. In three dimensions only, the cyclic permutations of (1, 2, 3) are all even permutations, and similarly the anticyclic permutations are all odd permutations. This means in 3d it is sufficient to take cyclic or anticyclic permutations of (1, 2, 3) to easily obtain all the even or odd permutations. Analogously to 2-dimensional matrices, the values of the 3-dimensional Levi-Civita symbol can be arranged into a 3×3×3 array. The symbol can also be written as a product of signs of index differences, a formula valid for all index values and for any n.
However, computing that formula naively has O(n2) time complexity. A tensor whose components in an orthonormal basis are given by the Levi-Civita symbol is sometimes called a permutation tensor. It is actually a pseudotensor, because under an orthogonal transformation of Jacobian determinant −1, for example a reflection in an odd number of dimensions, it would acquire a minus sign if it transformed as a tensor. As the Levi-Civita symbol is a pseudotensor, the result of taking a cross product with it is a pseudovector. Under a general coordinate change, the components of the permutation tensor are multiplied by the Jacobian of the transformation matrix. This implies that in coordinate frames different from the one in which the tensor was defined, its components can differ from those of the Levi-Civita symbol by an overall factor; if the frame is orthonormal, the factor will be ±1, depending on whether the orientation of the frame is the same or not. In index-free tensor notation, the Levi-Civita symbol is replaced by the concept of the Hodge dual. In an orthonormal basis, raising or lowering all the indices leaves the values unchanged; thus, one could write εij…k with the indices in either position
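The sign-of-permutation definition above can be sketched directly; this is an illustrative implementation of ours (0-based indices), together with a check of the familiar 3-d identity expressing the cross product in terms of εijk.

```python
# The Levi-Civita symbol via the sign of a permutation, and the
# identity (a x b)_i = sum_{j,k} eps_ijk a_j b_k in three dimensions.
from itertools import product

def levi_civita(*indices):
    """eps for 0-based indices: +1/-1 for even/odd permutations, 0 if any repeat."""
    n = len(indices)
    if len(set(indices)) != n:
        return 0
    sign = 1
    for p in range(n):
        for q in range(p + 1, n):
            if indices[p] > indices[q]:
                sign = -sign   # count inversions; each flips the parity
    return sign

def cross(a, b):
    return [sum(levi_civita(i, j, k) * a[j] * b[k]
                for j, k in product(range(3), repeat=2))
            for i in range(3)]

assert levi_civita(0, 1, 2) == 1      # even permutation
assert levi_civita(1, 0, 2) == -1     # odd permutation (one swap)
assert levi_civita(0, 0, 2) == 0      # repeated index
assert cross([1, 0, 0], [0, 1, 0]) == [0, 0, 1]   # e1 x e2 = e3
```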
Levi-Civita symbol
–
For the indices (i, j, k) in ε ijk, the values 1, 2, 3 occurring in the cyclic order (1,2,3) (yellow) correspond to ε = +1, while occurring in the reverse cyclic order (red) correspond to ε = −1, otherwise ε = 0.
33.
Dot product
–
In mathematics, the dot product or scalar product is an algebraic operation that takes two equal-length sequences of numbers and returns a single number. It is sometimes called the inner product in the context of Euclidean space. Algebraically, the dot product is the sum of the products of the corresponding entries of the two sequences of numbers; geometrically, it is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them. The dot product may be defined algebraically or geometrically. The geometric definition is based on the notions of angle and distance, and the equivalence of the two definitions relies on having a Cartesian coordinate system for Euclidean space. In such a presentation, the notions of length and angle are not primitive, so the equivalence of the two definitions of the dot product is a part of the equivalence of the classical and the modern formulations of Euclidean geometry. For instance, in three-dimensional space, the dot product of two vectors is the sum of the three products of their corresponding entries. In Euclidean space, a Euclidean vector is an object that possesses both a magnitude and a direction. A vector can be pictured as an arrow; its magnitude is its length, and its direction is the direction that the arrow points. The magnitude of a vector a is denoted by ∥a∥. The dot product of two Euclidean vectors a and b is defined by a ⋅ b = ∥a∥ ∥b∥ cos θ, where θ is the angle between a and b. In particular, if a and b are orthogonal, then the angle between them is 90° and a ⋅ b = 0. The scalar projection of a Euclidean vector a in the direction of a Euclidean vector b is given by a_b = ∥a∥ cos θ, where θ is the angle between a and b. In terms of the geometric definition of the dot product, this can be rewritten a_b = a ⋅ b̂, where b̂ = b/∥b∥ is the unit vector in the direction of b. The dot product is thus characterized geometrically by a ⋅ b = a_b ∥b∥ = b_a ∥a∥. The dot product, defined in this manner, is homogeneous under scaling in each variable, and it also satisfies a distributive law, meaning that a ⋅ (b + c) = a ⋅ b + a ⋅ c.
These properties may be summarized by saying that the dot product is a bilinear form. Moreover, this bilinear form is positive definite, which means that a ⋅ a is never negative and is zero if and only if a = 0. If e1, …, en are the standard basis vectors in Rn, then we may write a = [a1, …, an] = Σi ai ei and b = [b1, …, bn] = Σi bi ei. The vectors ei are an orthonormal basis, which means that they have unit length and are at right angles to each other. Since these vectors have unit length, ei ⋅ ei = 1, and since they form right angles with each other, ei ⋅ ej = 0 for i ≠ j. Thus in general we can say that ei ⋅ ej = δij
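The agreement of the algebraic and geometric definitions can be checked numerically; this is a small sketch with arbitrary sample vectors and helper names of our choosing, recovering θ from the algebraic value and confirming a·b = ∥a∥∥b∥ cos θ.

```python
# Algebraic vs geometric dot product: sum of entry products equals
# |a| |b| cos(theta), where theta is the angle between a and b.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

a, b = [1.0, 3.0, -5.0], [4.0, -2.0, -1.0]
algebraic = dot(a, b)                            # entrywise sum of products
theta = math.acos(algebraic / (norm(a) * norm(b)))
geometric = norm(a) * norm(b) * math.cos(theta)
assert math.isclose(algebraic, geometric)

assert dot([1, 0], [0, 1]) == 0   # orthogonal vectors: angle 90°, cosine 0
```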
Dot product
–
Scalar projection
34.
Cross product
–
In mathematics and vector algebra, the cross product or vector product is a binary operation on two vectors in three-dimensional space and is denoted by the symbol ×. Given two linearly independent vectors a and b, the cross product, a × b, is a vector that is perpendicular to both a and b and therefore normal to the plane containing them. It has many applications in mathematics, physics and engineering, and it should not be confused with the dot product. The cross product is given by a × b = ∥a∥ ∥b∥ sin(θ) n, where θ is the angle between a and b and n is a unit vector perpendicular to the plane containing them, with direction given by the right-hand rule. If two vectors have the same direction or if either one has zero length, then their cross product is zero. The cross product is anticommutative and is distributive over addition. The space R3 together with the cross product is an algebra over the real numbers, which is neither commutative nor associative, but is a Lie algebra with the cross product being the Lie bracket. Like the dot product, it depends on the metric of Euclidean space; but if the product is limited to non-trivial binary products with vector results, it exists only in three and seven dimensions. If one adds the further requirement that the product be uniquely defined, only the three-dimensional cross product qualifies. The cross product of two vectors a and b is defined only in three-dimensional space and is denoted by a × b; in physics, sometimes the notation a ∧ b is used. If the vectors a and b are parallel, then by the above formula the cross product of a and b is the zero vector 0. By the right-hand rule, pointing the forefinger toward a and the middle finger toward b, the unit vector n comes out of the thumb. Using this rule implies that the cross product is anti-commutative, i.e. b × a = −(a × b): pointing the forefinger toward b first, and then pointing the middle finger toward a, forces the thumb in the opposite direction. Using the cross product requires the handedness of the coordinate system to be taken into account; if a left-handed coordinate system is used, the direction of n is given by the left-hand rule. This, however, creates a problem, because transforming from one arbitrary reference system to another should not change the direction of n. The problem is clarified by realizing that the cross product of two vectors is not a true vector, but rather a pseudovector.
See cross product and handedness for more detail. In 1881, Josiah Willard Gibbs, and independently Oliver Heaviside, introduced both the dot product and the cross product, using a period and an "x", respectively, to denote them. The alternative names scalar product and vector product are widely used in the literature. Both the cross notation and the name cross product were possibly inspired by the fact that each scalar component of a × b is computed by multiplying non-corresponding components of a and b. Conversely, a dot product a ⋅ b involves multiplications between corresponding components of a and b. As explained below, the cross product can be expressed in the form of a determinant of a special 3×3 matrix
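The component formula for a × b, read off from that determinant, can be sketched and checked against the properties stated above (perpendicularity, anticommutativity, vanishing for parallel vectors); the sample vectors are arbitrary choices of ours.

```python
# The 3-d cross product by its component formula, with checks of the
# properties stated in the text.

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a, b = [2.0, 1.0, -1.0], [0.0, 3.0, 4.0]
c = cross(a, b)

assert dot(c, a) == 0 and dot(c, b) == 0        # a x b is perpendicular to both
assert cross(b, a) == [-x for x in c]           # anticommutative: b x a = -(a x b)
assert cross(a, [4.0, 2.0, -2.0]) == [0, 0, 0]  # parallel vectors give the zero vector
```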
Cross product
–
The cross product with respect to a right-handed coordinate system
36.
Curvilinear coordinates
–
In geometry, curvilinear coordinates are a coordinate system for Euclidean space in which the coordinate lines may be curved. These coordinates may be derived from a set of Cartesian coordinates by using a transformation that is locally invertible at each point; this means that one can convert a point given in a Cartesian coordinate system to its curvilinear coordinates and back. The name curvilinear coordinates, coined by the French mathematician Lamé, derives from the fact that the coordinate surfaces of the curvilinear systems are curved. Well-known examples of curvilinear coordinate systems in three-dimensional Euclidean space are Cartesian, cylindrical and spherical polar coordinates. A Cartesian coordinate surface in this space is a coordinate plane; in the same space, the coordinate surface r = 1 in spherical coordinates is the surface of a unit sphere. The formalism of curvilinear coordinates provides a unified and general description of the standard coordinate systems. Curvilinear coordinates are used to define the location or distribution of physical quantities which may be, for example, scalars, vectors, or tensors. Such expressions then become valid for any curvilinear coordinate system. Depending on the application, a curvilinear coordinate system may be simpler to use than the Cartesian coordinate system. For instance, a problem with spherical symmetry defined in R3 is usually easier to solve in spherical polar coordinates than in Cartesian coordinates. Equations with boundary conditions that follow coordinate surfaces for a particular coordinate system may be easier to solve in that system; one would, for instance, describe the motion of a particle in a rectangular box in Cartesian coordinates, whereas one would prefer spherical coordinates for motion in a sphere. Spherical coordinates are among the most used curvilinear coordinate systems in fields such as Earth sciences, cartography, and physics. A point P in 3d space can be defined using Cartesian coordinates by r = x ex + y ey + z ez, and it can also be defined by its curvilinear coordinates if this triplet of numbers defines a single point in an unambiguous way.
The coordinate axes are determined by the tangents to the coordinate curves at the intersection of three surfaces. They are not in general fixed directions in space, as happens to be the case for simple Cartesian coordinates, and thus there is generally no natural global basis for curvilinear coordinates. Applying the same derivatives to the curvilinear system locally at point P defines the basis vectors. Such a basis, whose vectors change their direction and/or magnitude from point to point, is called a local basis; all bases associated with curvilinear coordinates are necessarily local. Basis vectors that are the same at all points are global bases, and can be associated only with linear or affine coordinate systems. Note: for this article, e is reserved for the standard basis and h or b is for the curvilinear basis.
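The local-basis idea above can be illustrated numerically: differentiate the coordinate map along each coordinate curve at a point P and observe that the resulting tangent vectors depend on the point. This is a minimal sketch using spherical coordinates (the function names are illustrative):

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """Curvilinear (spherical) coordinates -> Cartesian, physics convention:
    theta is the polar angle, phi the azimuth."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

def local_basis(r, theta, phi, h=1e-7):
    """Tangents to the three coordinate curves at a point P, obtained by
    numerically differentiating the map; this is the local basis, which
    changes direction and magnitude from point to point."""
    p = spherical_to_cartesian(r, theta, phi)
    vectors = []
    for dr, dth, dph in ((h, 0, 0), (0, h, 0), (0, 0, h)):
        q = spherical_to_cartesian(r + dr, theta + dth, phi + dph)
        vectors.append(tuple((b - a) / h for a, b in zip(p, q)))
    return vectors
```

Evaluating `local_basis` at two different points returns different tangent vectors, in contrast to the fixed standard basis of Cartesian coordinates.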
Curvilinear coordinates
–
Curvilinear, affine, and Cartesian coordinates in two-dimensional space
37.
Surface integral
–
In mathematics, a surface integral is a generalization of multiple integrals to integration over surfaces. It can be thought of as the double-integral analog of the line integral. Given a surface, one may integrate over its scalar fields and vector fields. Surface integrals have applications in physics, particularly within the theory of classical electromagnetism. To compute the integral, let a parameterization of the surface be x(s, t), where (s, t) varies in some region T in the plane. The surface integral can then be expressed in the equivalent form ∬ S f dΣ = ∬ T f(x(s, t)) √g ds dt, where g is the determinant of the first fundamental form of the mapping x(s, t). For a surface given as a graph, with tangent vectors ∂r/∂x and ∂r/∂y, one can recognize the vector in the second line above as the normal vector to the surface. Note that because of the presence of the cross product, the above formulas only work for surfaces embedded in three-dimensional space. This can be seen as integrating a Riemannian volume form on the parameterized surface. Consider a vector field v on S, that is, for each x in S, v(x) is a vector. The surface integral of a vector field can be defined component-wise according to the definition of the surface integral of a scalar field; this applies, for example, in the expression of the electric field at some fixed point due to an electrically charged surface. Alternatively, we can integrate the normal component of the vector field: imagine that we have a fluid flowing through S, such that v(x) determines the velocity of the fluid at x. The flux is defined as the quantity of fluid flowing through S per unit time. This illustration implies that if the vector field is tangent to S at each point, then the flux is zero, because the fluid just flows in parallel to S. If v does not just flow along S, that is, if v has a normal component, then we find the formula ∬ S v ⋅ dΣ = ∬ S (v ⋅ n) dΣ = ∬ T (v ⋅ n) ‖∂x/∂s × ∂x/∂t‖ ds dt = ∬ T v ⋅ (∂x/∂s × ∂x/∂t) ds dt. The cross product on the right-hand side of this expression is a surface normal determined by the parametrization.
This formula defines the integral on the left. We may also interpret this as a special case of integrating 2-forms, where we identify the vector field with a 1-form and then integrate its Hodge dual over the surface. The transformations of the forms are similar. Then the integral of f on S is given by a double integral ∬ D … ds dt over the parameter domain D, where the cross product ∂x/∂s × ∂x/∂t is the surface element normal to S. Let us note that the integral of this 2-form is the same as the surface integral of the vector field which has as components f x, f y and f z.
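The flux formula above lends itself to a direct numerical check. The sketch below (illustrative names; a midpoint Riemann sum with finite-difference tangents, not an optimized quadrature) evaluates ∬ T v ⋅ (∂x/∂s × ∂x/∂t) ds dt on the unit sphere; for the radial field v(x) = x the exact answer is 4π:

```python
import math

def flux_through_sphere(v, n_theta=100, n_phi=100):
    """Numerically evaluate the flux of the vector field v through the unit
    sphere, using the (theta, phi) parameterization, finite-difference
    tangent vectors, and a midpoint Riemann sum over the parameter domain."""
    def x(th, ph):
        return (math.sin(th) * math.cos(ph),
                math.sin(th) * math.sin(ph),
                math.cos(th))
    dth = math.pi / n_theta
    dph = 2 * math.pi / n_phi
    h = 1e-6
    total = 0.0
    for i in range(n_theta):
        th = (i + 0.5) * dth
        for j in range(n_phi):
            ph = (j + 0.5) * dph
            p = x(th, ph)
            xs = [(b - a) / h for a, b in zip(p, x(th + h, ph))]   # dx/dtheta
            xt = [(b - a) / h for a, b in zip(p, x(th, ph + h))]   # dx/dphi
            normal = (xs[1] * xt[2] - xs[2] * xt[1],               # xs x xt
                      xs[2] * xt[0] - xs[0] * xt[2],
                      xs[0] * xt[1] - xs[1] * xt[0])
            total += sum(a * b for a, b in zip(v(p), normal)) * dth * dph
    return total
```

A constant field gives zero net flux through the closed surface, matching the intuition that whatever flows in on one side flows out on the other.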
Surface integral
–
The definition of surface integral relies on splitting the surface into small surface elements.
38.
Volume integral
–
In mathematics, and in particular in multivariable calculus, a volume integral refers to an integral over a 3-dimensional domain; that is, it is a special case of multiple integrals. Volume integrals are especially important in physics for many applications, for example to calculate flux densities. A volume integral can also mean a triple integral within a region D in R3 of a function f(x, y, z), usually written as ∭ D f(x, y, z) dx dy dz. A volume integral in cylindrical coordinates is ∭ D f ρ dρ dφ dz. Integrating a constant function over a simple region in this way is rather trivial, however; the volume integral is far more powerful in general.
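The cylindrical form above, including its Jacobian factor ρ, can be checked numerically. A minimal midpoint-rule sketch (illustrative names, not a library routine):

```python
import math

def cylindrical_volume_integral(f, rho_max, z_max, n=40):
    """Midpoint-rule approximation of the triple integral of f(rho, phi, z)
    times the Jacobian factor rho over a full cylinder of radius rho_max
    and height z_max."""
    d_rho, d_phi, d_z = rho_max / n, 2 * math.pi / n, z_max / n
    total = 0.0
    for i in range(n):
        rho = (i + 0.5) * d_rho
        for j in range(n):
            phi = (j + 0.5) * d_phi
            for k in range(n):
                z = (k + 0.5) * d_z
                total += f(rho, phi, z) * rho * d_rho * d_phi * d_z
    return total
```

With f = 1 the integral returns the cylinder's volume π ρmax² zmax, the simple case the text calls trivial.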
Volume integral
–
Volume element in spherical coordinates
39.
Integration (mathematics)
–
In mathematics, an integral assigns numbers to functions in a way that can describe displacement, area, volume, and other concepts that arise by combining infinitesimal data. Integration is one of the two main operations of calculus, with its inverse, differentiation, being the other. The area above the x-axis adds to the total and that below the x-axis subtracts from the total. Roughly speaking, the operation of integration is the reverse of differentiation. For this reason, the term integral may also refer to the related notion of the antiderivative; in this case, it is called an indefinite integral. The integrals discussed in this article are those termed definite integrals. A rigorous mathematical definition of the integral was given by Bernhard Riemann. It is based on a limiting procedure which approximates the area of a curvilinear region by breaking the region into thin vertical slabs. A line integral is defined for functions of two or three variables, and the interval of integration is replaced by a curve connecting two points on the plane or in space. In a surface integral, the curve is replaced by a piece of a surface in three-dimensional space. An early method of this kind, the method of exhaustion, was further developed and employed by Archimedes in the 3rd century BC and used to calculate areas of parabolas and an approximation to the area of a circle. A similar method was developed in China around the 3rd century AD by Liu Hui, and was used in the 5th century by the Chinese father-and-son mathematicians Zu Chongzhi and Zu Geng to find the volume of a sphere. The next significant advances in integral calculus did not begin to appear until the 17th century. Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation. Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers.
The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Newton and Leibniz. The theorem demonstrates a connection between integration and differentiation, and this connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Newton and Leibniz developed.
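Riemann's thin-vertical-slab procedure described above translates directly into code. A minimal sketch (the midpoint variant of the Riemann sum; illustrative names):

```python
def riemann_integral(f, a, b, n=10_000):
    """Approximate the definite integral of f over [a, b] by breaking the
    region into n thin vertical slabs (midpoint rule). Area below the
    x-axis counts negatively, giving the signed area described above."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx
```

For example, the slabs for f(x) = x over [−1, 1] cancel in pairs, so the signed area is zero, while f(x) = x² over [0, 1] approaches 1/3 as n grows.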
Integration (mathematics)
–
A definite integral of a function can be represented as the signed area of the region bounded by its graph.
40.
Manifold
–
In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point. More precisely, each point of an n-dimensional manifold has a neighbourhood that is homeomorphic to the Euclidean space of dimension n. One-dimensional manifolds include lines and circles, but not figure eights; two-dimensional manifolds are also called surfaces. Although a manifold locally resembles Euclidean space, globally it may not. For example, the surface of the sphere is not a Euclidean space, but in a region it can be charted by means of map projections of the region into the Euclidean plane. When a region appears in two neighbouring charts, the two representations do not coincide exactly, and a transformation is needed to pass from one to the other. Manifolds naturally arise as solution sets of systems of equations and as graphs of functions. One important class of manifolds is the class of differentiable manifolds; this differentiable structure allows calculus to be done on manifolds. A Riemannian metric on a manifold allows distances and angles to be measured. Symplectic manifolds serve as the phase spaces in the Hamiltonian formalism of classical mechanics, while four-dimensional Lorentzian manifolds model spacetime in general relativity. After a line, the circle is the simplest example of a topological manifold. Topology ignores bending, so a small piece of a circle is treated exactly the same as a small piece of a line. Consider, for instance, the top part of the unit circle, x² + y² = 1, where the y-coordinate is positive. Any point of this arc can be described by its x-coordinate. So, projection onto the first coordinate is a continuous and invertible mapping from the arc to the open interval (−1, 1). Such functions, along with the open regions they map, are called charts. Similarly, there are charts for the bottom, left, and right parts of the circle. Together, these parts cover the whole circle, and the four charts form an atlas for the circle.
The top and right charts, χtop and χright respectively, overlap in their domain: the quarter of the circle where both the x- and y-coordinates are positive. Each maps this part into the interval (0, 1), though differently. Let a be any number in (0, 1); then T(a) = χright(χtop⁻¹(a)) = χright(a, √(1 − a²)) = √(1 − a²). Such a function is called a transition map. The top, bottom, left, and right charts show that the circle is a manifold, but charts need not be geometric projections, and the number of charts is a matter of some choice. Slope-based charts provide a second atlas for the circle, with transition map t = 1/s. Each chart omits a single point, either (−1, 0) for s or (+1, 0) for t, and it can be proved that it is not possible to cover the full circle with a single chart. Viewed using calculus, the circle transition function T is simply a function between open intervals, which gives a meaning to the statement that T is differentiable.
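The charts and the transition map above can be written out explicitly; the sketch below follows the unit-circle example (illustrative names):

```python
import math

def chi_top(x, y):
    """Chart for the top part of the unit circle: project to the x-coordinate."""
    assert y > 0
    return x

def chi_top_inv(a):
    """Inverse of chi_top: recover the point on the upper arc."""
    return (a, math.sqrt(1 - a * a))

def chi_right(x, y):
    """Chart for the right part of the circle: project to the y-coordinate."""
    assert x > 0
    return y

def transition(a):
    """Transition map T = chi_right o chi_top^{-1}, defined on the overlap
    (0, 1), where both charts apply."""
    return chi_right(*chi_top_inv(a))
```

For a = 0.6 the overlap point is (0.6, 0.8), so T(0.6) = 0.8 = √(1 − 0.6²), matching the formula in the text.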
Manifold
–
The surface of the Earth requires (at least) two charts to include every point. Here the globe is decomposed into charts around the North and South Poles.
Manifold
–
The real projective plane is a two-dimensional manifold that cannot be realized in three dimensions without self-intersection, shown here as Boy's surface.
41.
General relativity
–
General relativity is the geometric theory of gravitation published by Albert Einstein in 1915 and the current description of gravitation in modern physics. General relativity generalizes special relativity and Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present. The relation is specified by the Einstein field equations, a system of partial differential equations. Some predictions of general relativity differ significantly from those of classical physics; examples of such differences include gravitational time dilation, gravitational lensing, and the gravitational redshift of light. The predictions of general relativity have been confirmed in all observations and experiments to date. Although general relativity is not the only relativistic theory of gravity, it is the simplest theory that is consistent with experimental data. Einstein's theory has important astrophysical implications. For example, it implies the existence of black holes, regions of space in which space and time are distorted in such a way that nothing, not even light, can escape, as an end-state for massive stars. The bending of light by gravity can lead to the phenomenon of gravitational lensing. General relativity also predicts the existence of gravitational waves, which have since been observed directly by the physics collaboration LIGO. In addition, general relativity is the basis of current cosmological models of an expanding universe. Soon after publishing the special theory of relativity in 1905, Einstein started thinking about how to incorporate gravity into his new relativistic framework. In 1907, beginning with a simple thought experiment involving an observer in free fall, he embarked on what would be an eight-year search for a relativistic theory of gravity. After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations. These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present. The Einstein field equations are nonlinear and very difficult to solve.
Einstein used approximation methods in working out initial predictions of the theory, but as early as 1916 the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse. In 1917, Einstein applied his theory to the universe as a whole. In line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations, the cosmological constant, to match that observational presumption. By 1929, however, the work of Hubble and others had shown that our universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which our universe has evolved from an extremely hot and dense earlier state. Einstein later declared the cosmological constant the biggest blunder of his life.
General relativity
–
A simulated black hole of 10 solar masses within the Milky Way, seen from a distance of 600 kilometers.
General relativity
–
Albert Einstein developed the theories of special and general relativity. Picture from 1921.
General relativity
–
Einstein cross: four images of the same astronomical object, produced by a gravitational lens
General relativity
–
Artist's impression of the space-borne gravitational wave detector LISA
42.
Metamaterials
–
A metamaterial is a material engineered to have a property that is not found in nature. They are made from assemblies of multiple elements fashioned from materials such as metals or plastics. The materials are arranged in repeating patterns, at scales that are smaller than the wavelengths of the phenomena they influence. Metamaterials derive their properties not from the properties of the base materials, but from their newly designed structures. Appropriately designed metamaterials can affect waves of electromagnetic radiation or sound in a manner not observed in bulk materials. Those that exhibit a negative index of refraction for particular wavelengths have attracted significant research; these materials are known as negative-index metamaterials. Metamaterials offer the potential to create superlenses; such a lens could allow imaging below the diffraction limit, that is, the minimum resolution that can be achieved by conventional glass lenses. A form of invisibility was demonstrated using gradient-index materials. Acoustic and seismic metamaterials are also active research areas. Explorations of artificial materials for manipulating electromagnetic waves began at the end of the 19th century. Some of the earliest structures that may be considered metamaterials were studied by Jagadish Chandra Bose, who in 1898 researched substances with chiral properties. Karl Ferdinand Lindman studied wave interaction with metallic helices as artificial chiral media in the early twentieth century. Winston E. Kock developed materials that had similar characteristics to metamaterials in the late 1940s. In the 1950s and 1960s, artificial dielectrics were studied for lightweight microwave antennas, and microwave radar absorbers were researched in the 1980s and 1990s as applications for artificial chiral media. Negative-index materials were first described theoretically by Victor Veselago in 1967. He proved that such materials could transmit light.
He showed that the phase velocity could be made anti-parallel to the direction of the Poynting vector; this is contrary to wave propagation in naturally occurring materials. John Pendry was the first to identify a practical way to make a left-handed metamaterial; such a material allows an electromagnetic wave to convey energy against its phase velocity. Pendry's idea was that metallic wires aligned along the direction of a wave could provide negative permittivity. Some natural materials already display negative permittivity; the challenge was achieving negative permeability. In 1999 Pendry demonstrated that a split ring with its axis placed along the direction of wave propagation could do so. In the same paper, he showed that a periodic array of wires and rings could give rise to a negative refractive index. Pendry also proposed a related negative-permeability design, the Swiss roll. In 2000, Smith et al. reported the experimental demonstration of functioning electromagnetic metamaterials by horizontally stacking, periodically, split-ring resonators and thin wire structures.
Metamaterials
–
Negative index metamaterial array configuration, which was constructed of copper split-ring resonators and wires mounted on interlocking sheets of fiberglass circuit board. The total array consists of 3 by 20×20 unit cells with overall dimensions of 10×100×100 mm.
43.
Chain rule
–
In calculus, the chain rule is a formula for computing the derivative of the composition of two or more functions. The rule can be stated more explicitly in terms of a variable: let F = f ∘ g, or equivalently, F(x) = f(g(x)) for all x; then one can write F′(x) = f′(g(x)) g′(x). The chain rule may also be written, in Leibniz's notation, in the following way. If a variable z depends on the variable y, which itself depends on the variable x, so that y and z are dependent variables, then z depends on x as well, via the intermediate variable y. The chain rule then states dz/dx = dz/dy ⋅ dy/dx. In integration, the counterpart to the chain rule is the substitution rule. The chain rule seems to have first been used by Leibniz. He used it to calculate the derivative of √(a + bz + cz²) as the composite of the square-root function and the function a + bz + cz². He first mentioned it in a 1676 memoir, and the common notation of the chain rule is due to Leibniz. L'Hôpital uses the chain rule implicitly in his Analyse des infiniment petits. The chain rule does not appear in any of Leonhard Euler's analysis books, even though they were written over a hundred years after Leibniz's discovery. Suppose that a skydiver jumps from an aircraft. Assume that t seconds after his jump, his height above sea level in meters is given by g(t) = 4000 − 4.9t². One model for the atmospheric pressure at a height h is f(h) = 101325 e^(−0.0001h). These two equations can be differentiated and combined in various ways to produce the following data: g′(t) = −9.8t is the velocity of the skydiver at time t; f′(h) = −10.1325 e^(−0.0001h) is the rate of change in pressure with respect to height at the height h, and is proportional to the buoyant force on the skydiver at h meters above sea level; (f ∘ g)(t) is the atmospheric pressure the skydiver experiences t seconds after his jump; and (f ∘ g)′(t) is the rate of change in atmospheric pressure with respect to time at t seconds after the skydiver's jump, and is proportional to the buoyant force on the skydiver at t seconds after his jump.
The chain rule gives a method for computing (f ∘ g)′ in terms of f′ and g′. While it is always possible to directly apply the definition of the derivative to compute the derivative of a composite function, this is usually very difficult. The utility of the chain rule is that it turns a complicated derivative into several easy derivatives. The chain rule states that, under appropriate conditions, (f ∘ g)′(x) = f′(g(x)) ⋅ g′(x).
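The skydiver example above can be worked in code: compute (f ∘ g)′(t) = f′(g(t)) g′(t) from the pieces, and check it against a finite-difference derivative of the composite (a sketch; the model functions are those from the text):

```python
import math

def g(t):
    """Height above sea level (m), t seconds after the jump."""
    return 4000 - 4.9 * t * t

def f(h):
    """Modeled atmospheric pressure (Pa) at height h meters."""
    return 101325 * math.exp(-0.0001 * h)

def g_prime(t):
    """Velocity of the skydiver at time t."""
    return -9.8 * t

def f_prime(h):
    """Rate of change in pressure with respect to height."""
    return -10.1325 * math.exp(-0.0001 * h)

def composite_prime(t):
    """Chain rule: (f o g)'(t) = f'(g(t)) * g'(t)."""
    return f_prime(g(t)) * g_prime(t)

def numeric_prime(t, h=1e-6):
    """Central finite-difference check of the same derivative."""
    return (f(g(t + h)) - f(g(t - h))) / (2 * h)
```

The two routes agree to high precision, illustrating that the chain rule replaces one hard derivative with two easy ones.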
Chain rule
44.
Tangent plane
–
The elements of the tangent space at x are called the tangent vectors at x. This is a generalization of the notion of a vector in a Euclidean space. The dimension of all the tangent spaces of a manifold is the same as that of the manifold. More generally, if a manifold is thought of as an embedded submanifold of Euclidean space, one can picture a tangent space in this literal fashion. This was the approach taken to defining parallel transport, and it was used by Dirac. More strictly, this defines an affine tangent space, distinct from the space of tangent vectors described by modern terminology. In algebraic geometry, in contrast, there is an intrinsic definition of the tangent space at a point P of a variety V. The points P at which the dimension of the tangent space is exactly that of V are called the non-singular points. For example, a curve that crosses itself does not have a unique tangent line at the crossing point. The singular points of V are those where the test to be a manifold fails. Once tangent spaces have been introduced, one can define vector fields, which are abstractions of the velocity field of particles moving on a manifold. A vector field attaches to every point of the manifold a vector from the tangent space at that point. All the tangent spaces can be glued together to form a new differentiable manifold of twice the dimension of the original manifold, called the tangent bundle of the manifold. The informal description above relies on a manifold's being embedded in a vector space Rm. However, it is more convenient to define the notion of tangent space based on the manifold itself. There are various equivalent ways of defining the tangent spaces of a manifold. While the definition via velocities of curves is intuitively the simplest, it is also the most cumbersome to work with. More elegant and abstract approaches are described below. In the embedded-manifold picture, a tangent vector at a point x is thought of as the velocity of a curve passing through the point x.
We can therefore take a tangent vector to be an equivalence class of curves passing through x while being tangent to each other at x. Suppose M is a Ck manifold and x is a point in M. Pick a chart φ: U → Rn, where U is an open subset of M containing x. Suppose two curves γ1: (−1, 1) → M and γ2: (−1, 1) → M with γ1(0) = γ2(0) = x are given such that φ ∘ γ1 and φ ∘ γ2 are both differentiable at 0. Then γ1 and γ2 are called equivalent at 0 if the ordinary derivatives of φ ∘ γ1 and φ ∘ γ2 at 0 coincide. This defines an equivalence relation on such curves, and the equivalence classes are known as the tangent vectors of M at x.
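The equivalence-of-curves test above can be carried out numerically for the circle. The sketch below (illustrative names; the chart is the projection φ(x, y) = y, valid near the point (1, 0)) compares the ordinary derivatives of φ ∘ γ at 0 for three curves through (1, 0):

```python
import math

def gamma1(t):
    """Unit-speed curve on the circle through (1, 0)."""
    return (math.cos(t), math.sin(t))

def gamma2(t):
    """Same path, reparametrized by sin(t); same velocity at t = 0."""
    s = math.sin(t)
    return (math.cos(s), math.sin(s))

def gamma3(t):
    """Same path traversed at double speed; NOT equivalent at 0."""
    return (math.cos(2 * t), math.sin(2 * t))

def phi(p):
    """Chart near (1, 0): project onto the y-coordinate."""
    return p[1]

def chart_derivative(gamma, h=1e-6):
    """Ordinary derivative of phi o gamma at 0; two curves are equivalent
    at 0 exactly when these derivatives coincide."""
    return (phi(gamma(h)) - phi(gamma(-h))) / (2 * h)
```

Here γ1 and γ2 fall in the same equivalence class (both derivatives equal 1), while γ3, with derivative 2, defines a different tangent vector at the same point.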
Tangent plane
–
A pictorial representation of the tangent space of a single point, x, on a sphere. A vector in this tangent space can represent a possible velocity at x. After moving in that direction to another nearby point, one's velocity would then be given by a vector in the tangent space of that nearby point—a different tangent space, not shown.
45.
Tensor derivative (continuum mechanics)
–
The derivatives of scalars, vectors, and second-order tensors with respect to second-order tensors are of considerable use in continuum mechanics. These derivatives are used in the theories of nonlinear elasticity and plasticity. The directional derivative provides a systematic way of finding these derivatives. The definitions of directional derivatives for various situations are given below; it is assumed that the functions are sufficiently smooth that derivatives can be taken. Let f be a real-valued function of the vector v. Then the derivative of f with respect to v in the direction u is the vector defined as ∂f/∂v ⋅ u = Df(v)[u] = [d/dα f(v + α u)] at α = 0, for all vectors u. Now let f be a vector-valued function of the vector v. Then the derivative of f with respect to v in the direction u is the second-order tensor defined as ∂f/∂v ⋅ u = Df(v)[u] = [d/dα f(v + α u)] at α = 0, for all vectors u. Similarly, let f be a real-valued function of the second-order tensor S. Then the derivative of f with respect to S in the direction T is the second-order tensor defined as ∂f/∂S : T = Df(S)[T] = [d/dα f(S + α T)] at α = 0, for all second-order tensors T. If T is a tensor field of order n > 1, then the divergence of the field is a tensor of order n − 1. Note: the Einstein summation convention of summing on repeated indices is used below. Consider a vector v and an arbitrary constant vector c. In index notation, the cross product is given by v × c = εijk vj ck ei, where εijk is the permutation symbol. In an orthonormal basis, the components of a second-order tensor A can be written as a matrix; in that case, the right-hand side corresponds to the cofactors of the matrix. The derivative of the second-order identity tensor 1 with respect to a second-order tensor A is given by ∂1/∂A : T = 0, because 1 is independent of A. Let A be a second-order tensor. Then ∂A/∂A : T = [d/dα (A + α T)] at α = 0 = T = I : T, where I is the fourth-order identity tensor. When F is equal to the identity tensor, we get the divergence theorem ∫Ω ∇G dΩ = ∫Γ n ⊗ G dΓ. We can express the formula for integration by parts in Cartesian index notation.
In index notation, ∫Ω Fij Gpj,p dΩ = ∫Γ np Fij Gpj dΓ − ∫Ω Gpj Fij,p dΩ.
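The Gateaux-style definition [d/dα f(S + α T)] at α = 0 used throughout the section can be evaluated numerically. A minimal sketch (illustrative names; central finite difference, with f(A) = tr(A·A) as the scalar-valued test function, whose exact directional derivative is 2 tr(S·T)):

```python
def directional_derivative(f, S, T, h=1e-6):
    """[d/dalpha f(S + alpha T)] at alpha = 0, the directional derivative
    from the definitions above, via a central finite difference."""
    n = len(S)
    Sp = [[S[i][j] + h * T[i][j] for j in range(n)] for i in range(n)]
    Sm = [[S[i][j] - h * T[i][j] for j in range(n)] for i in range(n)]
    return (f(Sp) - f(Sm)) / (2 * h)

def tr_sq(A):
    """f(A) = tr(A . A), a scalar-valued function of a second-order tensor."""
    n = len(A)
    return sum(A[i][j] * A[j][i] for i in range(n) for j in range(n))
```

For S = [[1, 2], [3, 4]] and T = [[0, 1], [0, 0]], the exact value 2 tr(S·T) = 6 is recovered.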
Tensor derivative (continuum mechanics)
–
Domain, its boundary and the outward unit normal
46.
Centrifugal force
–
In Newtonian mechanics, the centrifugal force is an inertial force directed away from the axis of rotation that appears to act on all objects when their motion is described in a rotating reference frame. The term has also been used for the reactive force that is a reaction to a centripetal force. The centrifugal force is an outward force apparent in a rotating reference frame. All measurements of position and velocity must be made relative to some frame of reference. An inertial frame of reference is one that is not accelerating. The use of an inertial frame of reference, which will be the case for all elementary calculations, is often not explicitly stated but may generally be assumed unless stated otherwise. In terms of an inertial frame of reference, the centrifugal force does not exist; all calculations can be performed using only Newton's laws of motion and the real forces. In its current usage, the term centrifugal force has no meaning in an inertial frame. In an inertial frame, an object that has no forces acting on it travels in a straight line. When measurements are made with respect to a rotating reference frame, however, this is no longer true. If it is desired to apply Newton's laws in the rotating frame, it is necessary to introduce new, fictitious, forces. Consider a stone being whirled round on a string, in a horizontal plane. The only real force acting on the stone in the horizontal plane is the tension in the string. There are no other forces acting on the stone, so there is a net force on the stone in the horizontal plane. In order to keep the stone moving in a circular path, this force, known as the centripetal force, must be continuously applied to the stone. As soon as it is removed, the stone moves in a straight line. In this inertial frame, the concept of centrifugal force is not required, as all motion can be properly described using only real forces and Newton's laws of motion.
In a frame of reference rotating with the stone around the same axis as the stone, the stone appears stationary. However, the tension in the string is still acting on the stone; if Newton's laws were applied in their usual form, the stone would accelerate in the direction of the net applied force, towards the axis of rotation, which it does not do. An outward centrifugal force must therefore be introduced: with this new force, the net force on the stone is zero. With the addition of this extra inertial or fictitious force, Newton's laws can be applied in the rotating frame as if it were an inertial frame.
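The force balance for the whirling stone can be sketched numerically (illustrative names; m is the stone's mass, ω the angular speed, r the radius of the circle, and the sign convention is that the outward radial direction is positive):

```python
def centripetal_force(m, omega, r):
    """Inertial frame: the string tension supplies the centripetal force,
    of magnitude m * omega**2 * r, directed toward the axis (negative)."""
    return -m * omega**2 * r

def centrifugal_force(m, omega, r):
    """Rotating frame: the fictitious centrifugal force, equal in magnitude
    and directed away from the axis, introduced so that the stationary
    stone has zero net force."""
    return m * omega**2 * r
```

In the rotating frame the two contributions cancel exactly, which is precisely the condition that the stone is at rest there.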
Centrifugal force
–
The interface of two immiscible liquids rotating around a vertical axis is an upward-opening circular paraboloid.
47.
Orthogonal coordinate system
–
In mathematics, orthogonal coordinates are defined as a set of d coordinates q = (q1, q2, …, qd) in which the coordinate surfaces all meet at right angles. A coordinate surface for a particular coordinate qk is the curve, surface, or hypersurface on which qk is a constant. Orthogonal coordinates are a special but extremely common case of curvilinear coordinates. The chief advantage of non-Cartesian coordinates is that they can be chosen to match the symmetry of the problem; the reason to prefer orthogonal coordinates over general curvilinear coordinates is simplicity, as many complications arise when coordinates are not orthogonal. For example, in orthogonal coordinates many problems may be solved by separation of variables. Separation of variables is a mathematical technique that converts a complex d-dimensional problem into d one-dimensional problems that can be solved in terms of known functions. Many equations can be reduced to Laplace's equation or the Helmholtz equation; Laplace's equation is separable in 13 orthogonal coordinate systems, and the Helmholtz equation is separable in 11 orthogonal coordinate systems. Orthogonal coordinates never have off-diagonal terms in their metric tensor, and their scaling functions hi are used to calculate differential operators in the new coordinates, e.g. the gradient, the Laplacian, the divergence and the curl. A simple method for generating orthogonal coordinate systems in two dimensions is by a conformal mapping of a standard two-dimensional grid of Cartesian coordinates, where a complex number z = x + iy can be formed from the coordinates x and y. However, there are other orthogonal coordinate systems in three dimensions that cannot be obtained by projecting or rotating a two-dimensional system, such as the ellipsoidal coordinates. More general orthogonal coordinates may be obtained by starting with some necessary coordinate surfaces and considering their orthogonal trajectories. In Cartesian coordinates, the basis vectors are fixed.
What distinguishes orthogonal coordinates is that, though the basis vectors vary from point to point, they are always orthogonal to each other; note that the vectors are not necessarily of equal length. The useful functions known as scale factors of the coordinates are simply the lengths hi of the basis vectors êi. The scale factors are sometimes called Lamé coefficients, but this terminology is best avoided since some better-known coefficients in linear elasticity carry the same name. Components in the normalized basis are most common in applications, for clarity of the quantities. The basis vectors shown above are covariant basis vectors. While a vector is an objective quantity, meaning its identity is independent of any coordinate system, the components of a vector depend on what basis the vector is represented in. Note that the summation symbols Σ and the summation range, indicating summation over all basis vectors, are often omitted. Vector addition and negation are done component-wise just as in Cartesian coordinates with no complication; extra considerations may be necessary for other vector operations. Note, however, that all of these operations assume that the two vectors in a field are bound to the same point.
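Both claims above, that the covariant basis vectors are mutually orthogonal while their lengths (the scale factors hi) differ, can be verified numerically. A minimal sketch for spherical coordinates, where the expected scale factors are 1, r, and r sin θ (illustrative names):

```python
import math

def covariant_basis(r, theta, phi, h=1e-7):
    """Covariant basis vectors e_i = dr/dq_i for spherical coordinates,
    via forward differences of the coordinate map."""
    def to_cart(r, th, ph):
        return (r * math.sin(th) * math.cos(ph),
                r * math.sin(th) * math.sin(ph),
                r * math.cos(th))
    p = to_cart(r, theta, phi)
    basis = []
    for dq in ((h, 0, 0), (0, h, 0), (0, 0, h)):
        q = to_cart(r + dq[0], theta + dq[1], phi + dq[2])
        basis.append(tuple((b - a) / h for a, b in zip(p, q)))
    return basis

def scale_factors(r, theta, phi):
    """h_i = |e_i|; for spherical coordinates: 1, r, r*sin(theta)."""
    return [math.sqrt(sum(c * c for c in e))
            for e in covariant_basis(r, theta, phi)]
```

The vanishing pairwise dot products correspond to the statement that the metric tensor has no off-diagonal terms.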
Orthogonal coordinate system
–
A conformal map acting on a rectangular grid. Note that the orthogonality of the curved grid is retained.
48.
Parabolic coordinates
–
Parabolic coordinates are a two-dimensional orthogonal coordinate system in which the coordinate lines are confocal parabolas. A three-dimensional version of parabolic coordinates is obtained by rotating the two-dimensional system about the symmetry axis of the parabolas. Parabolic coordinates have found many applications, e.g. the treatment of the Stark effect. The foci of all these parabolae are located at the origin. The two-dimensional parabolic coordinates form the basis for two sets of three-dimensional orthogonal coordinates: the parabolic cylindrical coordinates are produced by projecting in the z-direction, while rotation about the symmetry axis of the parabolae produces a set of confocal paraboloids, the three-dimensional system of parabolic coordinates. Expressed in Cartesian coordinates, x = σ τ cos φ, y = σ τ sin φ, z = ½ (τ² − σ²), where the parabolae are now aligned with the z-axis. The foci of all these paraboloids are located at the origin.
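The focal property can be verified numerically: since every constant-τ paraboloid has its focus at the origin, the distance from a point to the origin equals its distance to the directrix plane z = τ². A minimal sketch (the function name and the sample values are illustrative assumptions):

```python
import math

def parabolic_to_cartesian(sigma, tau, phi):
    # Three-dimensional parabolic coordinates, parabolae aligned with the z-axis:
    # x = sigma*tau*cos(phi), y = sigma*tau*sin(phi), z = (tau^2 - sigma^2)/2
    x = sigma * tau * math.cos(phi)
    y = sigma * tau * math.sin(phi)
    z = 0.5 * (tau**2 - sigma**2)
    return x, y, z

sigma, tau, phi = 1.5, 2.0, 0.8
x, y, z = parabolic_to_cartesian(sigma, tau, phi)

# Distance to the origin (the common focus of all the paraboloids)
r = math.sqrt(x * x + y * y + z * z)
# Distance to the directrix plane z = tau^2 of the constant-tau paraboloid
d_directrix = tau**2 - z
```

Both distances come out to (σ² + τ²)/2, confirming that the origin is the shared focus.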
49.
Elliptic coordinates
–
In geometry, the elliptic coordinate system is a two-dimensional orthogonal coordinate system in which the coordinate lines are confocal ellipses and hyperbolae. The two foci F1 and F2 are generally taken to be fixed at −a and +a, respectively, on the x-axis of the Cartesian coordinate system. The most common definition of the coordinates is x = a cosh μ cos ν, y = a sinh μ sin ν, where μ is a nonnegative real number. On the complex plane, an equivalent relationship is x + iy = a cosh(μ + iν). These definitions correspond to ellipses and hyperbolae. In an orthogonal coordinate system the lengths of the basis vectors are known as scale factors; the scale factors for the elliptic coordinates are equal, h μ = h ν = a √(sinh² μ + sin² ν) = a √(cosh² μ − cos² ν). Using the double-argument identities for hyperbolic and trigonometric functions, this can also be written as h μ = h ν = a √(½ (cosh 2μ − cos 2ν)). Other differential operators such as ∇ ⋅ F and ∇ × F can be expressed in the coordinates by substituting the scale factors into the general formulae found in orthogonal coordinates. An alternative and geometrically intuitive set of coordinates (σ, τ) = (cosh μ, cos ν) is sometimes used. The curves of constant σ are ellipses, whereas the curves of constant τ are hyperbolae; the coordinate τ must belong to the interval [−1, 1], whereas the σ coordinate must be greater than or equal to one. The coordinates have a simple relation to the distances to the foci F1 and F2: for any point in the plane, the sum d 1 + d 2 of its distances to the foci equals 2 a σ. Thus, the distance to F1 is a (σ + τ), whereas the distance to F2 is a (σ − τ). A drawback of these coordinates is that the points with Cartesian coordinates (x, y) and (x, −y) have the same coordinates (σ, τ), so the conversion back to Cartesian coordinates is not a function. The scale factors for these elliptic coordinates are h σ = a √((σ² − τ²)/(σ² − 1)) and h τ = a √((σ² − τ²)/(1 − τ²)). Hence, the area element becomes d A = a² (σ² − τ²)/√((σ² − 1)(1 − τ²)) dσ dτ.
Elliptic coordinates form the basis for several sets of three-dimensional orthogonal coordinates; the elliptic cylindrical coordinates, for example, are produced by projecting in the z-direction. Some traditional applications are solving systems such as electrons orbiting a molecule or planetary orbits that have an elliptical shape. The geometric properties of elliptic coordinates can also be useful in evaluating certain integrals; for concreteness, r, p and q could represent the momenta of a particle and its decomposition products, respectively, and the integrand might involve the kinetic energies of the products.
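The defining relations are easy to check numerically: a point on a constant-μ curve has a constant sum of distances to the two foci (so the curve is an ellipse), and the two closed forms of the scale factor agree because cosh² μ − sinh² μ = 1 = cos² ν + sin² ν. A short sketch (sample values are arbitrary assumptions):

```python
import math

a, mu, nu = 1.0, 0.9, 2.1   # focal half-distance and elliptic coordinates
x = a * math.cosh(mu) * math.cos(nu)
y = a * math.sinh(mu) * math.sin(nu)

# Distances to the foci F1 = (-a, 0) and F2 = (+a, 0); their sum is
# constant (= 2*a*cosh(mu)) on each constant-mu ellipse.
d1 = math.hypot(x + a, y)
d2 = math.hypot(x - a, y)

# Two equivalent closed forms of the common scale factor h_mu = h_nu
h_form1 = a * math.sqrt(math.sinh(mu) ** 2 + math.sin(nu) ** 2)
h_form2 = a * math.sqrt(math.cosh(mu) ** 2 - math.cos(nu) ** 2)
```

With σ = cosh μ and τ = cos ν, the same computation confirms d1 = a(σ + τ) and d2 = a(σ − τ).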
50.
Spherical coordinate system
–
The spherical coordinate system can be seen as the three-dimensional version of the polar coordinate system. The radial distance is called the radius or radial coordinate; the polar angle may be called colatitude, zenith angle, or normal angle. The use of symbols and the order of the coordinates differ between sources. In both systems ρ is often used instead of r, and other conventions are also used, so great care needs to be taken to check which one is being used. A number of different spherical coordinate systems following other conventions are used outside mathematics: in a geographical coordinate system, positions are measured in latitude, longitude and height or altitude, and there are a number of different celestial coordinate systems based on different fundamental planes. In these, the polar angle is often replaced by the elevation angle measured from the reference plane, so that an elevation angle of zero is at the horizon. The spherical coordinate system generalises the two-dimensional polar coordinate system; it can also be extended to higher-dimensional spaces, in which case it is referred to as a hyperspherical coordinate system. To define a spherical coordinate system, one must choose two orthogonal directions, the zenith and the azimuth reference, and an origin point in space. These choices determine a reference plane that contains the origin and is perpendicular to the zenith. The spherical coordinates of a point P are then defined as follows: the radius is the distance from the origin O to P; the inclination is the angle between the zenith direction and the line segment OP; and the azimuth is the angle measured from the azimuth reference direction to the orthogonal projection of the line segment OP on the reference plane. The sign of the azimuth is determined by choosing what is a positive sense of turning about the zenith; this choice is arbitrary, and is part of the coordinate system's definition. The elevation angle is 90 degrees minus the inclination angle. If the inclination is zero or 180 degrees, the azimuth is arbitrary; if the radius is zero, both azimuth and inclination are arbitrary.
In linear algebra, the vector from the origin O to the point P is often called the position vector of P. Several different conventions exist for representing the three coordinates, and for the order in which they should be written. The use of (r, θ, φ) to denote radial distance, inclination, and azimuth, respectively, is common practice in physics, and is specified by ISO standard 80000-2:2009, and earlier in ISO 31-11.
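The physics (ISO) convention described above translates directly into conversion formulas. A minimal sketch, with the degenerate cases noted in the text handled only by comments (the function names are illustrative):

```python
import math

def spherical_to_cartesian(r, theta, phi):
    # Physics convention: theta = inclination from the +z (zenith) axis,
    # phi = azimuth from the +x axis in the x-y reference plane.
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

def cartesian_to_spherical(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)
    # Both angles are arbitrary at r = 0; phi is arbitrary on the z-axis.
    theta = math.acos(z / r)
    phi = math.atan2(y, x)
    return r, theta, phi

point = spherical_to_cartesian(2.0, 0.6, 1.1)
roundtrip = cartesian_to_spherical(*point)
```

The round trip recovers (r, θ, φ) whenever θ is strictly between 0 and π, i.e. away from the degenerate axis.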
Spherical coordinates (r, θ, φ) as commonly used in physics: radial distance r, polar angle θ (theta), and azimuthal angle φ (phi). The symbol ρ (rho) is often used instead of r.
51.
Parabolic cylindrical coordinates
–
Parabolic cylindrical coordinates are a three-dimensional orthogonal coordinate system that results from projecting the two-dimensional parabolic coordinate system in the perpendicular z-direction; hence, the coordinate surfaces are confocal parabolic cylinders. The foci of all these cylinders are located along the line defined by x = y = 0. Parabolic cylindrical coordinates have found many applications, e.g. the potential theory of edges. They are suited to partial differential equations such as Laplace's equation or the Helmholtz equation, for which such coordinates allow a separation of variables; a typical example would be the electric field surrounding a flat semi-infinite conducting plate.
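Under one common convention (an assumption here, since conventions vary between references), x = σ τ, y = ½ (τ² − σ²), z = z, and the shared focal line x = y = 0 can be checked with the defining property of a parabola: distance to the focus equals distance to the directrix. A short sketch:

```python
import math

def parabolic_cyl_to_cartesian(sigma, tau, z):
    # One common convention: x = sigma*tau, y = (tau^2 - sigma^2)/2, z = z
    return sigma * tau, 0.5 * (tau**2 - sigma**2), z

sigma, tau = 1.2, 2.5
x, y, z = parabolic_cyl_to_cartesian(sigma, tau, 3.0)

# The constant-sigma parabola y = x^2/(2 sigma^2) - sigma^2/2 has its focus
# on the line x = y = 0 and directrix y = -sigma^2, so these must agree:
dist_to_focal_line = math.hypot(x, y)     # distance to the z-axis
dist_to_directrix = y + sigma**2
```

Both distances equal (σ² + τ²)/2, confirming that all the parabolic cylinders are confocal along x = y = 0.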
Coordinate surfaces of parabolic cylindrical coordinates. The red parabolic cylinder corresponds to σ=2, whereas the yellow parabolic cylinder corresponds to τ=1. The blue plane corresponds to z =2. These surfaces intersect at the point P (shown as a black sphere), which has Cartesian coordinates roughly (2, -1.5, 2).
52.
Oblate spheroidal coordinates
–
Oblate spheroidal coordinates are a three-dimensional orthogonal coordinate system that results from rotating the two-dimensional elliptic coordinate system about the non-focal axis of the ellipse; thus, the two foci are transformed into a ring of radius a in the x-y plane. Oblate spheroidal coordinates can also be considered as a limiting case of ellipsoidal coordinates in which the two largest semi-axes are equal in length. Oblate spheroidal coordinates are useful in solving partial differential equations when the boundary conditions are defined on an oblate spheroid or a hyperboloid of revolution; for example, they played an important role in the calculation of the Perrin friction factors. The azimuthal angle φ can fall anywhere on a full circle, between ±180°. These coordinates (μ, ν, φ) are favored over the alternatives below because they are not degenerate: each point in Cartesian coordinates corresponds to a unique set of coordinates, and the reverse is also true, except on the z-axis and the disk in the x-y plane inside the focal ring. An ellipse in the x-z plane has a major semiaxis of length a cosh μ along the x-axis and a minor semiaxis of length a sinh μ along the z-axis; the foci of all the ellipses in the x-z plane are located on the x-axis at ±a. Geometrically, the angle ν corresponds to the angle of the asymptotes of the hyperbola; the foci of all the hyperbolae are likewise located on the x-axis at ±a. The coordinates may also be calculated from the Cartesian coordinates by inverting these relations. Another set of oblate spheroidal coordinates (ζ, ξ, φ) is sometimes used, where ζ = sinh μ and ξ = sin ν. The surfaces of constant ζ are oblate spheroids and the surfaces of constant ξ are hyperboloids of revolution; the coordinate ζ is restricted by 0 ≤ ζ < ∞ and ξ is restricted by −1 ≤ ξ ≤ 1. An alternative and geometrically intuitive set of oblate spheroidal coordinates (σ, τ, φ) is also sometimes used, where σ = cosh μ and τ = cos ν; the coordinate σ must be greater than or equal to one. These coordinates are degenerate: two points in Cartesian coordinates map to one set of coordinates (σ, τ, φ). For any point, the sum d 1 + d 2 of its distances to the nearest and farthest points of the focal ring equals 2 a σ; thus, the far distance to the focal ring is a (σ + τ), whereas the near distance is a (σ − τ).
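With the standard transformation x = a cosh μ cos ν cos φ, y = a cosh μ cos ν sin φ, z = a sinh μ sin ν, two of the properties above can be confirmed numerically: a constant-μ surface is an oblate spheroid with semi-axes a cosh μ and a sinh μ, and the near/far distances to the focal ring sum to 2a cosh μ. A sketch with arbitrary sample values:

```python
import math

a, mu, nu, phi = 1.0, 0.8, 0.5, 1.3
x = a * math.cosh(mu) * math.cos(nu) * math.cos(phi)
y = a * math.cosh(mu) * math.cos(nu) * math.sin(phi)
z = a * math.sinh(mu) * math.sin(nu)

# Distances to the nearest and farthest points of the focal ring
# (radius a, lying in the x-y plane, centered on the origin)
rho = math.hypot(x, y)
d_near = math.hypot(rho - a, z)
d_far = math.hypot(rho + a, z)

# Membership test for the oblate spheroid mu = const (should equal 1)
on_spheroid = (rho / (a * math.cosh(mu))) ** 2 + (z / (a * math.sinh(mu))) ** 2
```

In the (σ, τ) notation, d_far = a(σ + τ) and d_near = a(σ − τ), so their sum is 2aσ as stated.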
53.
Prolate spheroidal coordinates
–
Prolate spheroidal coordinates are a three-dimensional orthogonal coordinate system that results from rotating the two-dimensional elliptic coordinate system about the focal axis of the ellipse; rotation about the other axis produces oblate spheroidal coordinates. Prolate spheroidal coordinates can also be considered as a limiting case of ellipsoidal coordinates in which the two smallest principal axes are equal in length. One example of their use is solving for the wavefunction of an electron moving in the field of two positively charged nuclei, as in the hydrogen molecular ion, H2+. Another example is solving for the electric field generated by two small electrode tips. Other limiting cases include fields generated by a line segment or a line with a missing segment. The azimuthal angle φ ranges over a full circle. The distances from the foci located at (0, 0, ±a) are r± = √(x² + y² + (z ∓ a)²) = a (cosh μ ∓ cos ν). An alternative and geometrically intuitive set of prolate spheroidal coordinates (σ, τ, φ) is sometimes used, where σ = cosh μ and τ = cos ν. The curves of constant σ are prolate spheroids, whereas the curves of constant τ are hyperboloids of revolution; the coordinate τ belongs to the interval [−1, 1], whereas the σ coordinate must be greater than or equal to one. The coordinates σ and τ have a simple relation to the distances to the foci F1 and F2: for any point in the plane, the sum d 1 + d 2 of its distances to the foci equals 2 a σ. Thus, the distance to F1 is a (σ + τ), whereas the distance to F2 is a (σ − τ).
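Using the standard transformation x = a sinh μ sin ν cos φ, y = a sinh μ sin ν sin φ, z = a cosh μ cos ν with foci at (0, 0, ±a), the focal-distance relations can be verified directly. A sketch with arbitrary sample values:

```python
import math

a, mu, nu, phi = 1.0, 0.7, 1.2, 0.4
x = a * math.sinh(mu) * math.sin(nu) * math.cos(phi)
y = a * math.sinh(mu) * math.sin(nu) * math.sin(phi)
z = a * math.cosh(mu) * math.cos(nu)

# Distances to the two foci F1 = (0, 0, -a) and F2 = (0, 0, +a)
d1 = math.sqrt(x * x + y * y + (z + a) ** 2)
d2 = math.sqrt(x * x + y * y + (z - a) ** 2)
```

The sum d1 + d2 equals 2a cosh μ (constant on each prolate spheroid), and the difference d1 − d2 equals 2a cos ν (constant on each hyperboloid), matching d1 = a(σ + τ) and d2 = a(σ − τ).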
The three coordinate surfaces of prolate spheroidal coordinates. The red prolate spheroid (stretched sphere) corresponds to μ=1, and the blue two-sheet hyperboloid corresponds to ν=45°. The yellow half-plane corresponds to φ=-60°, which is measured relative to the x -axis (highlighted in green). The black sphere represents the intersection point of the three surfaces, which has Cartesian coordinates of roughly (0.831, -1.439, 2.182).
54.
Elliptic cylindrical coordinates
–
Elliptic cylindrical coordinates are a three-dimensional orthogonal coordinate system that results from projecting the two-dimensional elliptic coordinate system in the perpendicular z-direction. Hence, the coordinate surfaces are prisms of confocal ellipses and hyperbolae. The two foci F1 and F2 are generally taken to be fixed at −a and +a, respectively; these definitions correspond to ellipses and hyperbolae. The scale factors for the coordinates μ and ν are equal, h μ = h ν = a √(sinh² μ + sin² ν), whereas the remaining scale factor h z = 1. An alternative and geometrically intuitive set of coordinates (σ, τ) = (cosh μ, cos ν) is sometimes used. The curves of constant σ are ellipses, whereas the curves of constant τ are hyperbolae; the coordinate τ must belong to the interval [−1, 1], whereas the σ coordinate must be greater than or equal to one. The coordinates have a simple relation to the distances to the foci F1 and F2: for any point in the plane, the sum d 1 + d 2 of its distances to the foci equals 2 a σ. Thus, the distance to F1 is a (σ + τ), whereas the distance to F2 is a (σ − τ). A typical application would be the electric field surrounding a flat conducting plate of width 2 a. The three-dimensional wave equation, when expressed in elliptic cylindrical coordinates, may be solved by separation of variables. The geometric properties of elliptic coordinates can also be useful; for concreteness, r, p and q could represent the momenta of a particle and its decomposition products, respectively, and the integrand might involve the kinetic energies of the products.
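Confocality — the fact that every constant-μ elliptic prism and every constant-ν hyperbolic prism shares the same focal lines at x = ±a — can be checked numerically in the cross-sectional plane (z drops out). A sketch with arbitrary sample values:

```python
import math

a, mu, nu = 2.0, 0.7, 1.9        # the z coordinate plays no role here
x = a * math.cosh(mu) * math.cos(nu)
y = a * math.sinh(mu) * math.sin(nu)

# Ellipse mu = const: semi-axes A = a*cosh(mu), B = a*sinh(mu),
# so its focal distance sqrt(A^2 - B^2) must equal a.
focal_dist = math.sqrt((a * math.cosh(mu)) ** 2 - (a * math.sinh(mu)) ** 2)

# Hyperbola nu = const: |d1 - d2| is constant and equals 2*a*|cos(nu)|.
d1 = math.hypot(x + a, y)
d2 = math.hypot(x - a, y)
```

Since cosh² μ − sinh² μ = 1, the focal distance is a for every μ, i.e. all the elliptic prisms are confocal.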
Coordinate surfaces of elliptic cylindrical coordinates. The yellow sheet is the prism of a half-hyperbola corresponding to ν=-45°, whereas the red tube is an elliptical prism corresponding to μ=1. The blue sheet corresponds to z =1. The three surfaces intersect at the point P (shown as a black sphere) with Cartesian coordinates roughly (2.182, -1.661, 1.0). The foci of the ellipse and hyperbola lie at x = ±2.0.
55.
Bispherical coordinates
–
Bispherical coordinates are a three-dimensional orthogonal coordinate system that results from rotating the two-dimensional bipolar coordinate system about the axis that connects the two foci. Thus, the two foci F1 and F2 in bipolar coordinates remain points (on the z-axis) in the bispherical coordinate system. The surfaces of constant τ are non-intersecting spheres of different radii, x² + y² + (z − a coth τ)² = a²/sinh² τ, that surround the foci; the centers of the constant-τ spheres lie along the z-axis, whereas the constant-σ tori are centered in the x-y plane. The formulae for the inverse transformation are σ = arccos((R² − a²)/Q), τ = arsinh(2 a z/Q), φ = atan(y/x), where R = √(x² + y² + z²) and Q = √((R² − a²)² + 4 a² (x² + y²)). The classic applications of bispherical coordinates are in solving partial differential equations, e.g. Laplace's equation, for which bispherical coordinates allow a separation of variables. However, the Helmholtz equation is not separable in bispherical coordinates. A typical example would be the electric field surrounding two conducting spheres of different radii.
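With the standard forward transformation x = a sin σ cos φ/(cosh τ − cos σ), y = a sin σ sin φ/(cosh τ − cos σ), z = a sinh τ/(cosh τ − cos σ), both the sphere property of the constant-τ surfaces and the inverse transformation can be verified numerically. A sketch (sample values are arbitrary assumptions):

```python
import math

a, sigma, tau, phi = 1.0, 1.1, 0.6, 2.0
denom = math.cosh(tau) - math.cos(sigma)
x = a * math.sin(sigma) * math.cos(phi) / denom
y = a * math.sin(sigma) * math.sin(phi) / denom
z = a * math.sinh(tau) / denom

# Constant-tau surface: sphere of radius a/|sinh(tau)| centered at (0, 0, a*coth(tau))
center_z = a * math.cosh(tau) / math.sinh(tau)
radius = math.sqrt(x * x + y * y + (z - center_z) ** 2)

# Inverse transformation recovers (sigma, tau)
R2 = x * x + y * y + z * z
Q = math.sqrt((R2 - a * a) ** 2 + 4 * a * a * (x * x + y * y))
sigma_back = math.acos((R2 - a * a) / Q)
tau_back = math.asinh(2 * a * z / Q)
```

The recovered radius matches a/sinh τ, and the inverse formulae return the original σ and τ.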
56.
Bipolar cylindrical coordinates
–
Bipolar cylindrical coordinates are a three-dimensional orthogonal coordinate system that results from projecting the two-dimensional bipolar coordinate system in the perpendicular z-direction. The two lines of foci F1 and F2 of the projected Apollonian circles are generally taken to be defined by x = −a, y = 0 and x = +a, y = 0, respectively, in the Cartesian coordinate system. The term bipolar is often applied to other curves having two singular points, such as ellipses, hyperbolas, and Cassini ovals; however, the term bipolar coordinates is never used to describe coordinates associated with those curves. The surfaces of constant τ are non-intersecting cylinders of different radii, (x − a coth τ)² + y² = a²/sinh² τ, that surround the focal lines; the focal lines and all these cylinders are parallel to the z-axis. In the z = 0 plane, the centers of the constant-σ and constant-τ cylinders lie on the y- and x-axes, respectively. The scale factors for the bipolar coordinates σ and τ are equal, h σ = h τ = a/(cosh τ − cos σ), whereas the remaining scale factor h z = 1. A typical application would be the electric field surrounding two parallel cylindrical conductors.
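Using the standard two-dimensional bipolar transformation x = a sinh τ/(cosh τ − cos σ), y = a sin σ/(cosh τ − cos σ) (with z unchanged), the cylinder property of the constant-τ surfaces can be checked numerically. A sketch with arbitrary sample values:

```python
import math

a, sigma, tau = 1.0, 2.0, 0.75
denom = math.cosh(tau) - math.cos(sigma)
x = a * math.sinh(tau) / denom
y = a * math.sin(sigma) / denom

# Constant-tau surface: a circular cylinder (parallel to the z-axis) of
# radius a/|sinh(tau)| centered on the line x = a*coth(tau), y = 0.
center_x = a * math.cosh(tau) / math.sinh(tau)
radius = math.hypot(x - center_x, y)

# Common scale factor h_sigma = h_tau at this point
h = a / denom
```

The computed distance to the cylinder axis equals a/sinh τ regardless of σ, confirming that each constant-τ surface is a cylinder surrounding the focal line.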
Coordinate surfaces of the bipolar cylindrical coordinates. The yellow crescent corresponds to σ, whereas the red tube corresponds to τ and the blue plane corresponds to z =1. The three surfaces intersect at the point P (shown as a black sphere).
57.
Conical coordinates
–
Conical coordinates are a three-dimensional orthogonal coordinate system consisting of concentric spheres (described by their radius r) and two families of perpendicular cones, aligned along the z- and x-axes, respectively. The conical coordinates are defined by x = r μ ν/(b c), y = (r/b) √((μ² − b²)(ν² − b²)/(b² − c²)), z = (r/c) √((μ² − c²)(c² − ν²)/(b² − c²)), with the limitations on the coordinates ν² < c² < μ² < b². In this coordinate system, both Laplace's equation and the Helmholtz equation are separable. The scale factor for the radius r is one (h r = 1), as in spherical coordinates; the scale factors for the two conical coordinates are h μ = r √((μ² − ν²)/((b² − μ²)(μ² − c²))) and h ν = r √((μ² − ν²)/((b² − ν²)(c² − ν²))). An alternative set of coordinates (ξ, ψ, ζ) has been derived, with ξ = r cos(φ sin ζ), ψ = r sin(φ sin ζ), ζ = θ. The corresponding inverse relations are r = √(ξ² + ψ²), φ = (1/sin ζ) arctan(ψ/ξ), θ = ζ. If the path between any two points is constrained to the surface of the cone given by ζ = π/4, then the geodesic distance between the points (ξ1, ψ1) and (ξ2, ψ2) is s12² = (ξ1 − ξ2)² + (ψ1 − ψ2)².
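One consistency check for the conical transformation (written here under one common sign convention, which is an assumption since conventions vary between references) is that x² + y² + z² = r², i.e. the constant-r surfaces really are concentric spheres. A minimal sketch:

```python
import math

def conical_to_cartesian(r, mu, nu, b, c):
    # One common convention, valid for nu^2 < c^2 < mu^2 < b^2, so every
    # quantity under a square root below is nonnegative.
    x = r * mu * nu / (b * c)
    y = (r / b) * math.sqrt((mu**2 - b**2) * (nu**2 - b**2) / (b**2 - c**2))
    z = (r / c) * math.sqrt((mu**2 - c**2) * (c**2 - nu**2) / (b**2 - c**2))
    return x, y, z

r, b, c = 1.0, 2.0, 1.0
mu, nu = math.sqrt(2.0), 0.5      # satisfies nu^2 < c^2 < mu^2 < b^2
x, y, z = conical_to_cartesian(r, mu, nu, b, c)

# Constant-r coordinate surface is a sphere: x^2 + y^2 + z^2 = r^2
sphere_check = x * x + y * y + z * z
```

The identity holds for any admissible (μ, ν), which is why the radial scale factor is simply h r = 1.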