1.
Tensor analysis
–
In mathematics, tensors are geometric objects that describe linear relations between geometric vectors, scalars, and other tensors. Elementary examples of such relations include the dot product, the cross product, and linear maps. Geometric vectors, often used in physics and engineering applications, and scalars themselves are also tensors. Given a coordinate basis or fixed frame of reference, a tensor can be represented as an organized multidimensional array of numerical values. The order of a tensor is the dimensionality of the array needed to represent it, or equivalently, the number of indices needed to label a component of that array. For example, a linear map is represented by a matrix (a 2-dimensional array) in a basis, and is therefore a 2nd-order tensor. A vector is represented as a 1-dimensional array in a basis, and scalars are single numbers and are thus 0th-order tensors. Because they express a relationship between vectors, tensors themselves must be independent of a particular choice of coordinate system. The precise form of the transformation law determines the type of the tensor; the tensor type is a pair of natural numbers (n, m), where n is the number of contravariant indices and m is the number of covariant indices. The total order of a tensor is the sum of these two numbers. The concept enabled an alternative formulation of the differential geometry of a manifold in the form of the Riemann curvature tensor. There are several approaches to defining tensors; although seemingly different, the approaches describe the same geometric concept using different languages and at different levels of abstraction. For example, a linear operator is represented in a basis as a two-dimensional square n × n array. The numbers in the array are known as the scalar components of the tensor, or simply its components. They are denoted by indices giving their position in the array, as subscripts and superscripts. For example, the components of an order-2 tensor T could be denoted T_ij; whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below. The total number of indices required to identify each component uniquely is equal to the dimension of the array, and is called the order, degree, or rank of the tensor. However, the term rank generally has another meaning in the context of matrices. Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis.
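As a minimal worked illustration of such a transformation law (a sketch, assuming a finite-dimensional real vector space and an invertible change-of-basis matrix R; none of these symbols come from the passage above): if the new basis vectors are e'_j = R^k_j e_k, then the components of a type (1, 1) tensor transform as

    T'^i_j = (R^{-1})^i_k T^k_l R^l_j,

with summation over repeated indices. The upper (contravariant) index picks up the inverse matrix and the lower (covariant) index picks up the matrix itself, which is what keeps the tensor itself independent of the chosen basis.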

3.
Covariance and contravariance of vectors
–
In multilinear algebra and tensor analysis, covariance and contravariance describe how the quantitative description of certain geometric or physical entities changes with a change of basis. In physics, a basis is sometimes thought of as a set of reference axes; a change of scale on the reference axes corresponds to a change of units in the problem. For instance, in changing scale from meters to centimeters, the components of a velocity vector are multiplied by 100. Vectors exhibit this behavior of changing scale inversely to changes in the scale of the reference axes; as a result, vectors often have units of distance or distance times some other unit. In contrast, dual vectors typically have units of the inverse of distance or the inverse of distance times some other unit; an example of a dual vector is the gradient, which has units of a spatial derivative, or distance^{-1}. The components of dual vectors change in the same way as changes to the scale of the reference axes. For a vector to be basis-independent, the matrix that transforms the vector of components must be the inverse of the matrix that transforms the basis vectors; the components of vectors are therefore said to be contravariant. In Einstein notation, contravariant components are denoted with upper indices, as in v = v^i e_i. For a dual vector to be basis-independent, the components of the dual vector must co-vary with a change of basis to remain representing the same covector; that is, the components must be transformed by the same matrix as the change-of-basis matrix. The components of dual vectors are said to be covariant. Examples of covariant vectors generally appear when taking the gradient of a function. In Einstein notation, covariant components are denoted with lower indices, as in v = v_i e^i. Curvilinear coordinate systems, such as cylindrical or spherical coordinates, are often used in physical and geometric problems. Tensors are objects in multilinear algebra that can have aspects of both covariance and contravariance. In physics, a vector typically arises as the outcome of a measurement or series of measurements and is represented as a list of numbers. The numbers in the list depend on the choice of coordinate system; for a vector to represent a geometric object, it must be possible to describe how it looks in any other coordinate system. That is to say, the components of the vector will transform in a prescribed way in passing from one coordinate system to another. A contravariant vector has components that transform as the coordinates do under changes of coordinates, including rotation and dilation. The vector itself does not change under these operations; instead, the components of the vector change in a way that cancels the change in the axes. In other words, if the reference axes were rotated in one direction, the component representation of the vector would rotate in exactly the opposite way. Similarly, if the reference axes were stretched in one direction, the components of the vector would shrink in an exactly compensating way. This important requirement is what distinguishes a contravariant vector from any other triple of physically meaningful quantities.
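A minimal worked example of this behavior (a sketch with illustrative numbers, not taken from the passage above; a one-dimensional axis is assumed): let e be the basis vector of length one meter and e' = e/100 the basis vector of length one centimeter. A displacement of five meters is

    v = 5 e = 500 e',

so the component multiplies by 100 while the basis vector shrinks by 100 (contravariant). By contrast, a gradient with component 2 per meter becomes 2/100 per centimeter, changing in the same way as the axes (covariant).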

4.
Linear map
–
In mathematics, a linear map is a mapping V → W between two modules that preserves the operations of addition and scalar multiplication. An important special case is when V = W, in which case the map is called a linear operator, or an endomorphism of V. Sometimes the term linear function has the same meaning as linear map. A linear map always maps linear subspaces onto linear subspaces; for instance, it maps a plane through the origin to a plane, a straight line, or a point. Linear maps can often be represented as matrices, and simple examples include rotation and reflection linear transformations. In the language of abstract algebra, a linear map is a module homomorphism; in the language of category theory, it is a morphism in the category of modules over a given ring. Let V and W be vector spaces over the same field K. A function f : V → W is a linear map if for any vectors x_1, …, x_m ∈ V and scalars a_1, …, a_m ∈ K, the equality f(a_1 x_1 + ⋯ + a_m x_m) = a_1 f(x_1) + ⋯ + a_m f(x_m) holds. When V and W can be regarded as vector spaces over different fields, it is necessary to specify which of these fields is being used in the definition of linear. If V and W are considered as spaces over the field K as above, we speak of K-linear maps; for example, the conjugation of complex numbers is an R-linear map C → C, but it is not C-linear. A linear map from V to K is called a linear functional. These statements generalize to any left-module RM over a ring R without modification, and to any right-module upon reversing the scalar multiplication. The zero map between two left-modules over the same ring is always linear. The identity map on any module is a linear operator. Any homothety centered at the origin of a vector space, v ↦ c v where c is a scalar, is a linear operator; this does not hold in general for modules, where such a map might only be semilinear. For real numbers, the map x ↦ x² is not linear. An m × n matrix A defines a linear map from Rⁿ to Rᵐ by sending the column vector x to Ax; conversely, any linear map between finite-dimensional vector spaces can be represented in this manner, see the following section. Differentiation defines a linear map from the space of all differentiable functions to the space of all functions; it also defines a linear operator on the space of all smooth functions. If V and W are finite-dimensional vector spaces over a field F, then the functions that send linear maps f : V → W to dim_F(W) × dim_F(V) matrices in the way described in the sequel are themselves linear maps. The expected value of a random variable is linear: for random variables X and Y we have E[X + Y] = E[X] + E[Y] and E[aX] = a E[X].
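As a minimal sketch of the matrix representation just described (the particular map and matrix are illustrative assumptions, not from the passage above): the map f : R² → R², f(x, y) = (2x + y, 3y), is linear and is represented in the standard basis by

    A = [ 2  1 ]
        [ 0  3 ],    f(x, y) = A (x, y)^T.

Linearity can be checked directly, since f(a(x_1, y_1) + (x_2, y_2)) = a f(x_1, y_1) + f(x_2, y_2), whereas the map x ↦ x² fails this test because (x_1 + x_2)² ≠ x_1² + x_2² in general.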

5.
One-form
–
In linear algebra, a one-form on a vector space is the same as a linear functional on the space. The usage of one-form in this context usually distinguishes the one-forms from higher-degree multilinear functionals on the space. In differential geometry, a one-form on a differentiable manifold is a smooth section of the cotangent bundle. Equivalently, a one-form on a manifold M is a mapping of the total space of the tangent bundle of M to R whose restriction to each fibre is a linear functional on the tangent space. Symbolically, α : TM → R, with α_x = α|_{T_x M} : T_x M → R linear for each x. Often one-forms are described locally, particularly in local coordinates; from this perspective, a one-form has a covariant transformation law on passing from one coordinate system to another. Thus a one-form is an order-1 covariant tensor field. Many real-world concepts can be described as one-forms. Indexing into a vector: the second element of a three-vector is given by the one-form (0, 1, 0); that is, the second element of (x, y, z) is (0, 1, 0) · (x, y, z) = y. Mean: the mean element of an n-vector is given by the one-form (1/n, 1/n, …, 1/n); that is, mean(v) = (1/n, 1/n, …, 1/n) · v. Sampling: sampling with a kernel can be considered a one-form, where the one-form is the kernel shifted to the appropriate location. Net present value of a net cash flow R(t) is given by the one-form w(t) = (1 + i)^{−t}, where i is the discount rate; that is, NPV(R) = ⟨w, R⟩ = ∫_{t=0}^{∞} R(t) (1 + i)^{−t} dt. The most basic non-trivial differential one-form is the change-in-angle form dθ. This is defined as the derivative of the angle function θ; integrating this derivative along a path gives the total change in angle over the path, and integrating over a closed loop gives the winding number. In the language of geometry, this derivative is a one-form, and it is closed but not exact; it is the most basic example of such a form. Let U ⊆ R be open, and consider a differentiable function f : U → R with derivative f′. The differential df of f at a point x_0 ∈ U is defined as a linear map of the variable dx; specifically, df_{x_0} : dx ↦ f′(x_0) dx. Hence the map x ↦ df_x sends each point x to a linear functional df_x. This is the simplest example of a differential form. In terms of the de Rham complex, one has an assignment from zero-forms to one-forms, i.e. f ↦ df. See also: Two-form, Reciprocal lattice, Tensor, Inner product.
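A minimal worked instance of the differential described above (the function and point are illustrative assumptions): for f(x) = x² on U = R, the derivative is f′(x) = 2x, so at the point x_0 = 3 the differential is the linear functional

    df_{x_0} : dx ↦ f′(3) dx = 6 dx,

and more generally df assigns to each point x the linear map dx ↦ 2x dx, the coordinate expression of the one-form df = 2x dx.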

6.
Euclidean vector
–
In mathematics, physics, and engineering, a Euclidean vector is a geometric object that has magnitude and direction. Vectors can be added to other vectors according to vector algebra. A Euclidean vector is frequently represented by a line segment with a definite direction, or graphically as an arrow, connecting an initial point A with a terminal point B, and denoted by A B →. A vector is what is needed to carry the point A to the point B; the term was first used by 18th-century astronomers investigating planetary revolution around the Sun. The magnitude of the vector is the distance between the two points, and the direction refers to the direction of displacement from A to B. These operations and associated laws qualify Euclidean vectors as an example of the more generalized concept of vectors defined simply as elements of a vector space. Vectors play an important role in physics: the velocity and acceleration of a moving object and many other physical quantities can be usefully thought of as vectors. Although most of them do not represent distances, their magnitude and direction can still be represented by the length and direction of an arrow. The mathematical representation of a physical vector depends on the coordinate system used to describe it. Other vector-like objects that describe physical quantities and transform in a similar way under changes of the coordinate system include pseudovectors and tensors. The concept of vector, as we know it today, evolved gradually over a period of more than 200 years; about a dozen people made significant contributions. Giusto Bellavitis abstracted the basic idea in 1835 when he established the concept of equipollence: working in a Euclidean plane, he made equipollent any pair of line segments of the same length and orientation. Essentially, he realized an equivalence relation on the pairs of points in the plane. The term vector was introduced by William Rowan Hamilton as part of a quaternion, which is a sum q = s + v of a real number s and a 3-dimensional vector. Like Bellavitis, Hamilton viewed vectors as representative of classes of equipollent directed segments. Hermann Grassmann's work was largely neglected until the 1870s. Peter Guthrie Tait carried the quaternion standard after Hamilton; his 1867 Elementary Treatise on Quaternions included extensive treatment of the nabla or del operator ∇. In 1878, Elements of Dynamic was published by William Kingdon Clifford. Clifford simplified the quaternion study by isolating the dot product and cross product of two vectors from the complete quaternion product; this approach made vector calculations available to engineers and others working in three dimensions and skeptical of the fourth. Josiah Willard Gibbs, who was exposed to quaternions through James Clerk Maxwell's Treatise on Electricity and Magnetism, separated off their vector part for independent treatment. The first half of Gibbs's Elements of Vector Analysis, published in 1881, presents what is essentially the modern system of vector analysis. In 1901, Edwin Bidwell Wilson published Vector Analysis, adapted from Gibbs's lectures. In physics and engineering, a vector is typically regarded as a geometric entity characterized by a magnitude and a direction; it is formally defined as a directed line segment, or arrow, in a Euclidean space.
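A minimal numerical illustration of magnitude and direction (the points are assumptions for the example, not from the passage above): for A = (1, 2) and B = (4, 6) in the plane, the vector A B → has components

    A B → = B − A = (3, 4),    |A B →| = √(3² + 4²) = 5,

so the magnitude is the distance between A and B, and the direction is that of the displacement from A toward B.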

7.
Scalar (mathematics)
–
A scalar is an element of a field which is used to define a vector space. A quantity described by multiple scalars, such as one having both direction and magnitude, is called a vector. More generally, a vector space may be defined by using any field instead of the real numbers, such as the complex numbers; the scalars of that space will then be the elements of the associated field. A scalar product operation, not to be confused with scalar multiplication, may be defined on a vector space; a vector space equipped with a scalar product is called an inner product space. The real component of a quaternion is also called its scalar part. The term is also sometimes used informally to mean a vector, matrix, tensor, or other usually compound value that is actually reduced to a single component; thus, for example, the product of a 1×n matrix and an n×1 matrix, which is formally a 1×1 matrix, is often said to be a scalar. The term scalar matrix is used to denote a matrix of the form kI, where k is a scalar and I is the identity matrix. The word scalar derives from the Latin word scalaris, an adjectival form of scala. The English word scale also comes from scala. According to a citation in the Oxford English Dictionary, the first recorded usage of the term scalar in English came with W. R. Hamilton in 1846, referring to the real part of a quaternion. A vector space is defined as a set of vectors, a set of scalars, and a scalar multiplication operation that takes a scalar k and a vector v to another vector kv. For example, in a coordinate space, the scalar multiplication k(v_1, v_2, …, v_n) yields (kv_1, kv_2, …, kv_n). In a function space, kƒ is the function x ↦ kƒ(x). The scalars can be taken from any field, including the rational, algebraic, real, and complex numbers, as well as finite fields. According to a fundamental theorem of linear algebra, every vector space has a basis. It follows that every vector space over a scalar field K is isomorphic to a vector space where the coordinates are elements of K; for example, every real vector space of dimension n is isomorphic to the n-dimensional real space Rⁿ. Alternatively, a vector space V can be equipped with a norm function that assigns to every vector v in V a scalar ||v||. By definition, multiplying v by a scalar k also multiplies its norm by |k|; if ||v|| is interpreted as the length of v, this operation can be described as scaling the length of v by k. A vector space equipped with a norm is called a normed vector space. The norm is usually defined to be an element of V's scalar field K, which restricts the latter to fields that support the notion of sign. Moreover, if V has dimension 2 or more, K must be closed under square root, as well as the four arithmetic operations; thus the rational numbers Q are excluded.
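A minimal numerical sketch of scalar multiplication and norm scaling (the vector and scalar are illustrative assumptions): in R³ with the Euclidean norm, take v = (1, 2, 2) and k = 3; then

    kv = (3, 6, 6),    ||v|| = √(1 + 4 + 4) = 3,    ||kv|| = |k| ||v|| = 9,

which illustrates both the componentwise scalar multiplication k(v_1, v_2, v_3) = (kv_1, kv_2, kv_3) and the scaling of the length by |k|.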