1.
Tensor analysis
–
In mathematics, tensors are geometric objects that describe linear relations between geometric vectors, scalars, and other tensors. Elementary examples of such relations include the dot product, the cross product, and linear maps. Geometric vectors are often used in physics and engineering applications. Given a coordinate basis or fixed frame of reference, a tensor can be represented as an organized multidimensional array of numerical values. The order of a tensor is the dimensionality of the array needed to represent it, or equivalently the number of indices needed to label a component of that array. For example, a linear map is represented by a matrix in a basis, and is therefore a 2nd-order tensor. A vector is represented as a 1-dimensional array in a basis, and is a 1st-order tensor; scalars are single numbers and are thus 0th-order tensors. Because they express a relationship between vectors, tensors themselves must be independent of a particular choice of coordinate system. The precise form of the transformation law determines the type of the tensor: the tensor type is a pair of natural numbers (n, m), where n is the number of contravariant indices and m is the number of covariant indices, and the total order of the tensor is the sum n + m. The concept enabled an alternative formulation of the differential geometry of a manifold in the form of the Riemann curvature tensor. There are several approaches to defining tensors; although seemingly different, the approaches describe the same geometric concept using different languages and at different levels of abstraction. For example, a linear operator is represented in a basis as a two-dimensional square n × n array. The numbers in the array are known as the scalar components of the tensor, or simply its components. They are denoted by indices giving their position in the array, written as subscripts and superscripts. For example, the components of an order-2 tensor T could be denoted Tij, where i and j are indices running from 1 to n. Whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below.
The total number of indices required to identify each component uniquely is equal to the dimension of the array, and is called the order, degree or rank of the tensor. (However, the term rank generally has another meaning in the context of matrices and tensors.) Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis.
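As a minimal sketch of the transformation law just described, the following pure-Python example (helper names are our own, chosen for illustration) treats a linear map as a type-(1,1) tensor: under a change of basis with matrix P, its component matrix T transforms to P⁻¹TP, while the geometric object itself is unchanged.

```python
# Sketch: components of a (1,1)-tensor (a linear map) under a change of basis.
# Illustrative only; the helper functions are not from any particular library.

def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(a, v):
    """Apply a 2x2 matrix to a length-2 vector."""
    return [sum(a[i][k] * v[k] for k in range(2)) for i in range(2)]

T = [[1.0, 2.0], [3.0, 4.0]]      # components of the map in the old basis
P = [[2.0, 0.0], [0.0, 1.0]]      # change-of-basis matrix
P_inv = [[0.5, 0.0], [0.0, 1.0]]  # inverse of P

# Transformation law for a type-(1,1) tensor: T' = P^-1 T P
T_new = matmul(matmul(P_inv, T), P)

# The underlying map is unchanged: applying T to v and re-expressing the
# result agrees with applying T' to the re-expressed v.
v = [1.0, 1.0]
lhs = matvec(P_inv, matvec(T, v))      # P^-1 (T v)
rhs = matvec(T_new, matvec(P_inv, v))  # T' (P^-1 v)
print(lhs == rhs)  # True
```

The final check is exactly the basis-independence requirement from the text: the components change, but only so that the tensor represents the same relation between vectors.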

3.
Covariance and contravariance of vectors
–
In multilinear algebra and tensor analysis, covariance and contravariance describe how the quantitative description of certain geometric or physical entities changes with a change of basis. In physics, a basis is sometimes thought of as a set of reference axes; a change of scale on the reference axes corresponds to a change of units in the problem. For instance, in changing scale from meters to centimeters, the components of a velocity vector will multiply by 100. Vectors exhibit this behavior of changing scale inversely to changes in the scale of the reference axes, and as a result vectors often have units of distance or distance times some other unit. In contrast, dual vectors (covectors) typically have units of the inverse of distance, or the inverse of distance times some other unit; an example of a dual vector is the gradient, which has units of a spatial derivative, or distance−1. The components of dual vectors change in the same way as changes to the scale of the reference axes. For a vector to be basis-independent, its components must change oppositely to the basis: the matrix that transforms the vector of components must be the inverse of the matrix that transforms the basis vectors. The components of such vectors are said to be contravariant. In Einstein notation, contravariant components are denoted with upper indices, as in v = v^i e_i. For a dual vector to be basis-independent, the components of the dual vector must co-vary with a change of basis to remain representing the same covector; that is, the components must be transformed by the same matrix as the change of basis matrix. The components of such dual vectors are said to be covariant. Examples of covariant vectors generally appear when taking a gradient of a function. In Einstein notation, covariant components are denoted with lower indices, as in v = v_i e^i. Curvilinear coordinate systems, such as cylindrical or spherical coordinates, are often used in physical problems.
Tensors are objects in multilinear algebra that can have aspects of both covariance and contravariance. In physics, a vector typically arises as the outcome of a measurement or series of measurements, and is represented as a list of numbers. The numbers in the list depend on the choice of coordinate system. For a vector to represent a geometric object, it must be possible to describe how it looks in any other coordinate system; that is to say, the components of the vector will transform in a certain way in passing from one coordinate system to another. A contravariant vector has components that transform as the coordinates do under changes of coordinates, including rotation and dilation. The vector itself does not change under these operations; instead, the components of the vector change in a way that cancels the change in the spatial axes. In other words, if the reference axes were rotated in one direction, the component representation of the vector would rotate in exactly the opposite way. Similarly, if the reference axes were stretched in one direction, the components would change in an exactly compensating way, and this important requirement is what distinguishes a contravariant vector from any other triple of physically meaningful quantities.
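The meters-to-centimeters example above can be sketched in a few lines of Python (the variable names are our own). Rescaling the unit of length multiplies contravariant components (a velocity) by 100 and divides covariant components (a gradient) by 100, and the scalar pairing between them is unchanged.

```python
# Sketch: contravariant vs covariant components under a change of units.
# Changing units from meters to centimeters multiplies contravariant
# components by 100 and divides covariant components by 100.

factor = 100  # 1 meter = 100 centimeters

velocity_m = [3.0, 4.0]     # contravariant components, in meters/second
gradient_m = [10.0, 20.0]   # covariant components, in units of 1/meter

velocity_cm = [v * factor for v in velocity_m]  # components multiply by 100
gradient_cm = [g / factor for g in gradient_m]  # components divide by 100

# The scalar pairing <gradient, velocity> is independent of the units chosen.
pair_m = sum(g * v for g, v in zip(gradient_m, velocity_m))
pair_cm = sum(g * v for g, v in zip(gradient_cm, velocity_cm))
print(abs(pair_m - pair_cm) < 1e-9)  # True
```

This is the content of the "exactly compensating" behavior described in the text: the two kinds of components rescale inversely, so invariant quantities built from both are unaffected.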

4.
Linear operator
–
In mathematics, a linear map is a mapping V → W between two vector spaces (or, more generally, two modules) that preserves the operations of addition and scalar multiplication. An important special case is when V = W, in which case the map is called a linear operator, or an endomorphism of V. Sometimes the term linear function has the same meaning as linear map. A linear map always maps linear subspaces onto linear subspaces; for instance, it maps a plane through the origin to a plane through the origin. Linear maps can often be represented as matrices, and simple examples include rotation and reflection linear transformations. In the language of abstract algebra, a linear map is a module homomorphism; in the language of category theory, it is a morphism in the category of modules over a given ring. Let V and W be vector spaces over the same field K. A function f : V → W is linear if for any vectors x1, …, xm ∈ V and scalars a1, …, am ∈ K, the equality f(a1 x1 + ⋯ + am xm) = a1 f(x1) + ⋯ + am f(xm) holds. When V and W can be regarded as vector spaces over more than one field, it is necessary to specify which of these fields is being used in the definition of linear. For example, if V and W are considered as spaces over the field R of real numbers, the conjugation of complex numbers is an R-linear map C → C, but it is not C-linear. A linear map from V to K is called a linear functional. These statements generalize to any left-module RM over a ring R without modification, and to any right-module upon reversing the scalar multiplication. The zero map between two left-modules over the same ring is always linear. The identity map on any module is a linear operator. Any homothety centered at the origin of a vector space, v ↦ cv where c is a scalar, is a linear operator; this does not hold in general for modules, where such a map might only be semilinear. For real numbers, the map x ↦ x2 is not linear. Conversely, any linear map between finite-dimensional vector spaces can be represented by a matrix; see the following section. Differentiation defines a linear map from the space of all differentiable functions to the space of all functions. It also defines a linear operator on the space of all smooth functions.
If V and W are finite-dimensional vector spaces over a field F, then the functions that send linear maps f : V → W to matrices in the way described in the sequel are themselves linear maps. The expected value of a random variable is linear: for random variables X and Y we have E(X + Y) = E(X) + E(Y) and E(aX) = aE(X).
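The claim that differentiation is a linear map can be checked concretely on polynomials. Below is a small sketch (the helper names are our own) representing a polynomial a0 + a1·x + a2·x² by its coefficient list and verifying the defining identity f(ap + bq) = a·f(p) + b·f(q).

```python
# Sketch: differentiation on polynomials of degree < 3 is a linear operator.
# A polynomial a0 + a1*x + a2*x^2 is stored as its coefficient list [a0, a1, a2].

def deriv(coeffs):
    """Differentiate a polynomial given by its coefficient list."""
    return [i * c for i, c in enumerate(coeffs)][1:] + [0.0]

def add(p, q):
    return [a + b for a, b in zip(p, q)]

def scale(k, p):
    return [k * a for a in p]

p = [1.0, 2.0, 3.0]   # 1 + 2x + 3x^2
q = [0.0, 5.0, 1.0]   # 5x + x^2

# Linearity: deriv(a*p + b*q) == a*deriv(p) + b*deriv(q)
a, b = 2.0, -1.0
lhs = deriv(add(scale(a, p), scale(b, q)))
rhs = add(scale(a, deriv(p)), scale(b, deriv(q)))
print(lhs == rhs)  # True
```

Because the coefficient lists live in a fixed-dimensional space, `deriv` is exactly a linear map between finite-dimensional vector spaces and could equally be written as a matrix, as the text notes.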

5.
One-form
–
In linear algebra, a one-form on a vector space is the same as a linear functional on the space. The usage of one-form in this context usually distinguishes the one-forms from higher-degree multilinear functionals on the space. In differential geometry, a one-form on a differentiable manifold is a smooth section of the cotangent bundle. Equivalently, a one-form on a manifold M is a smooth mapping of the total space of the tangent bundle of M to R whose restriction to each fibre is a linear functional on the tangent space; symbolically, α : TM → R, where αx = α|TxM : TxM → R is linear. Often one-forms are described locally, particularly in local coordinates. From this perspective, a one-form has a covariant transformation law on passing from one coordinate system to another; thus a one-form is an order-1 covariant tensor field. Many real-world concepts can be described as one-forms. Indexing into a vector: the second element of a three-vector is given by the one-form (0, 1, 0); that is, the second element of (x, y, z) is (0, 1, 0) · (x, y, z) = y. Mean: the mean element of an n-vector is given by the one-form (1/n, 1/n, …, 1/n); that is, mean(v) = (1/n, 1/n, …, 1/n) · v. Sampling: sampling with a kernel can be considered a one-form, where the one-form is the kernel shifted to the appropriate location. Net present value of a net cash flow R(t) is given by the one-form w(t) = (1 + i)−t, where i is the discount rate; that is, NPV(R) = ⟨w, R⟩ = ∫0∞ R(t)(1 + i)−t dt. The most basic non-trivial differential one-form is the change-in-angle form dθ. This is defined as the derivative of the angle function θ; integrating this derivative along a path gives the total change in angle over the path, and integrating over a closed loop gives the winding number. In the language of differential geometry, this derivative is a one-form, and it is closed but not exact. This is the most basic example of such a form. Let U ⊆ R be open, and consider a differentiable function f : U → R with derivative f′. The differential df of f, at a point x0 ∈ U, is defined as a certain linear map of the variable dx.
Specifically, df : dx ↦ f′(x0) dx; hence the map x ↦ df sends each point x to the linear functional df(x). This is the simplest example of a differential form. In terms of the de Rham complex, one has an assignment from zero-forms to one-forms, i.e. f ↦ df. See also: two-form, reciprocal lattice, tensor, inner product.
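The indexing and mean examples above amount to applying a row vector to a column vector. A minimal sketch in Python (the function name is our own):

```python
# Sketch: one-forms on R^n as row vectors acting on vectors via the dot product.

def apply_one_form(w, v):
    """Apply the one-form w (a covector) to the vector v."""
    return sum(wi * vi for wi, vi in zip(w, v))

v = [7.0, 11.0, 13.0]

# Indexing: the one-form (0, 1, 0) picks out the second component.
second = apply_one_form([0.0, 1.0, 0.0], v)
print(second)  # 11.0

# Mean: the one-form (1/n, ..., 1/n) computes the mean of the components.
n = len(v)
mean = apply_one_form([1.0 / n] * n, v)
```

Both one-forms are linear functionals on R³, in line with the definition at the start of the section.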

6.
Vector (geometry)
–
In mathematics, physics, and engineering, a Euclidean vector is a geometric object that has magnitude and direction. Vectors can be added to other vectors according to vector algebra. A Euclidean vector is frequently represented by a line segment with a definite direction, or graphically as an arrow, connecting an initial point A with a terminal point B, and denoted by AB→. A vector is what is needed to carry the point A to the point B; the term was first used by 18th century astronomers investigating planetary revolution around the Sun. The magnitude of the vector is the distance between the two points, and the direction refers to the direction of displacement from A to B. These operations and associated laws qualify Euclidean vectors as an example of the more generalized concept of vectors, defined simply as elements of a vector space. Vectors play an important role in physics: the velocity and acceleration of a moving object, and many other physical quantities, can be usefully thought of as vectors. Although most of them do not represent distances, their magnitude and direction can still be represented by the length and direction of an arrow. The mathematical representation of a physical vector depends on the coordinate system used to describe it. Other vector-like objects that describe physical quantities and transform in a similar way under changes of the coordinate system include pseudovectors and tensors. The concept of vector, as we know it today, evolved gradually over a period of more than 200 years; about a dozen people made significant contributions. Giusto Bellavitis abstracted the basic idea in 1835 when he established the concept of equipollence: working in a Euclidean plane, he made equipollent any pair of line segments of the same length and orientation. Essentially he realized an equivalence relation on the pairs of points in the plane. The term vector was introduced by William Rowan Hamilton as part of a quaternion, which is a sum q = s + v of a real number s and a 3-dimensional vector.
Like Bellavitis, Hamilton viewed vectors as representative of classes of equipollent directed segments. Hermann Grassmann's work in this direction was largely neglected until the 1870s. Peter Guthrie Tait carried the quaternion standard after Hamilton; his 1867 Elementary Treatise on Quaternions included extensive treatment of the nabla or del operator ∇. In 1878 Elements of Dynamic was published by William Kingdon Clifford. Clifford simplified the quaternion study by isolating the dot product and cross product of two vectors from the complete quaternion product, and this approach made vector calculations available to engineers and others working in three dimensions and skeptical of the fourth. Josiah Willard Gibbs, who was exposed to quaternions through James Clerk Maxwell's Treatise on Electricity and Magnetism, separated off their vector part for independent treatment. The first half of Gibbs's Elements of Vector Analysis, published in 1881, presents what is essentially the modern system of vector analysis. In 1901 Edwin Bidwell Wilson published Vector Analysis, adapted from Gibbs's lectures. In physics and engineering, a vector is typically regarded as a geometric entity characterized by a magnitude and a direction; it is formally defined as a directed line segment, or arrow.

7.
Scalar (mathematics)
–
A scalar is an element of a field which is used to define a vector space. A quantity described by multiple scalars, such as one having both direction and magnitude, is called a vector. More generally, a vector space may be defined by using any field instead of the real numbers, such as the complex numbers; the scalars of that space will then be the elements of the associated field. A scalar product operation (not to be confused with scalar multiplication) may be defined on a vector space, allowing two vectors to be multiplied to produce a scalar; a vector space equipped with a scalar product is called an inner product space. The real component of a quaternion is also called its scalar part. The term is also sometimes used informally to mean a vector, matrix, tensor, or other usually compound value that is actually reduced to a single component; thus, for example, the product of a 1×n matrix and an n×1 matrix, which is formally a 1×1 matrix, is often said to be a scalar. The term scalar matrix is used to denote a matrix of the form kI, where k is a scalar and I is the identity matrix. The word scalar derives from the Latin word scalaris, a form of scala. The English word scale also comes from scala. According to a citation in the Oxford English Dictionary, the first recorded usage of the term scalar in English came with W. R. Hamilton. A vector space is defined as a set of vectors, a set of scalars, and a scalar multiplication operation that takes a scalar k and a vector v to another vector kv. For example, in a coordinate space, the scalar multiplication k(v1, v2, …, vn) yields (kv1, kv2, …, kvn). In a function space, kƒ is the function x ↦ k(ƒ(x)). The scalars can be taken from any field, including the rational, algebraic, real, and complex numbers, as well as finite fields. According to a fundamental theorem of linear algebra, every vector space has a basis. It follows that every vector space over a scalar field K is isomorphic to a coordinate vector space where the coordinates are elements of K; for example, every real vector space of dimension n is isomorphic to the n-dimensional real space Rn. Alternatively, a vector space V can be equipped with a norm function that assigns to every vector v in V a scalar ||v||. By definition, multiplying v by a scalar k also multiplies its norm by |k|.
If ||v|| is interpreted as the length of v, this operation can be described as scaling the length of v by k. A vector space equipped with a norm is called a normed vector space. The norm is defined to be an element of V's scalar field K. Moreover, if V has dimension 2 or more, K must be closed under square root, as well as the four arithmetic operations; thus the rational numbers Q are excluded.
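The norm-scaling rule just stated, ||kv|| = |k|·||v||, can be checked directly for the Euclidean norm. A short sketch (the helper names are our own):

```python
# Sketch: scalar multiplication scales a vector's Euclidean norm by |k|.
import math

def norm(v):
    """Euclidean norm of a vector given as a list of floats."""
    return math.sqrt(sum(x * x for x in v))

def scale(k, v):
    """Scalar multiplication k*v, componentwise."""
    return [k * x for x in v]

v = [3.0, 4.0]
k = -2.0

print(norm(v))            # 5.0
print(norm(scale(k, v)))  # 10.0, i.e. abs(k) * norm(v)
```

Note that the scalar k may be negative; only its absolute value affects the length, which is why the norm lands in a field closed under square root.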

8.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007, and 10 digits long if assigned before 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN configuration of recognition was generated in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966. The 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines. The ISBN configuration of recognition was generated in 1967 in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974. The ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340 01381 8: 340 indicating the publisher, 01381 their serial number, and 8 the check digit. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Number EAN-13s.
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces. Separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture. In the United Kingdom, the United States, and some other countries, the service is provided by non-government-funded organisations and obtaining an ISBN involves a fee. In Australia, ISBNs are issued by the commercial library services agency Thorpe-Bowker.
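The SBN-to-ISBN conversion described above works because the ISBN-10 check digit is a weighted sum in which a leading 0 contributes nothing, so the check digit is unchanged. A small sketch (the function name is our own):

```python
# Sketch: ISBN-10 check digit calculation. The first 9 digits are weighted
# 10, 9, ..., 2; the check digit makes the total divisible by 11. A leading 0
# contributes 0 * 10, which is why an SBN keeps its check digit when converted.

def isbn10_check_digit(first9):
    """Compute the ISBN-10 check digit for a string of the first 9 digits."""
    total = sum((10 - i) * int(d) for i, d in enumerate(first9))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

# ISBN 0-340-01381-8, the example from the text above.
print(isbn10_check_digit("034001381"))  # 8
```

Because the computation is modulo 11, the check can come out as 10, which is written as the Roman numeral X in a printed ISBN-10.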

9.
The Road to Reality
–
The Road to Reality: A Complete Guide to the Laws of the Universe is a book on modern physics by the British mathematical physicist Roger Penrose, published in 2004. It covers the basics of the Standard Model of particle physics, discussing general relativity and quantum mechanics, and the book discusses how they relate to the physical world. Many fields that 19th century scientists believed were separate, such as electricity and magnetism, are now understood as aspects of more fundamental phenomena. Some texts, both popular and university level, introduce these topics as separate concepts, and then reveal their combination much later. The Road to Reality reverses this process, first expounding the underlying mathematics of space–time, then showing how electromagnetism and other phenomena emerge from it. Physics enters the discussion on page 383 with the topic of space–time. Energy and conservation laws appear in the discussion of Lagrangians and Hamiltonians, before the book moves on to a discussion of quantum physics and particle theory. A discussion of the measurement problem in quantum mechanics is given a full chapter; superstrings are given a chapter near the end of the book, as are loop quantum gravity and twistor theory. The book ends with an exploration of other theories and possible ways forward. The final chapters reflect Penrose's personal perspective, which differs in some respects from what he regards as the current fashion among theoretical physicists. He is skeptical about string theory, to which he prefers loop quantum gravity, and he is optimistic about his own approach, twistor theory. He also holds some controversial views about the role of consciousness in physics. One review quipped: "So here, then, are all the laws of the universe, in one handy 1,100-page volume. It would appear that they are more complicated than the laws of cricket. Moreover, it says on the front cover that it is a Sunday Times Top 10 Bestseller, and on the back, next to the price. Open it up at random and you will see that Jordan's autobiography this ain't." Related resources include an archive of the Road to Reality internet forum (now defunct) and solutions for many Road to Reality exercises.

10.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to the exact scope and definition of mathematics. Mathematicians seek out patterns and use them to formulate new conjectures; they resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, and measurement. Practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said: "The universe cannot be read until we have learned the language and become familiar with the characters in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences.
Applied mathematics has led to entirely new mathematical disciplines, such as statistics and game theory. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind. There is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns. In Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a systematic study of mathematics in its own right with Greek mathematics. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today; the overwhelming majority of works in this ocean contain new mathematical theorems and their proofs. The word máthēma is derived from μανθάνω, while the modern Greek equivalent is μαθαίνω, both meaning "to learn". In Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study", even in Classical times.

11.
Coordinate system
–
A coordinate system uses one or more numbers, or coordinates, to uniquely determine the position of a point or other geometric element. The order of the coordinates is significant, and they are sometimes identified by their position in an ordered tuple and sometimes by a letter, as in "the x-coordinate". The coordinates are taken to be real numbers in elementary mathematics. The use of a coordinate system allows problems in geometry to be translated into problems about numbers and vice versa; this is the basis of analytic geometry. The simplest example of a coordinate system is the identification of points on a line with real numbers using the number line. In this system, an arbitrary point O (the origin) is chosen on a given line. The coordinate of a point P is defined as the signed distance from O to P. Each point is given a unique coordinate and each real number is the coordinate of a unique point. The prototypical example of a coordinate system is the Cartesian coordinate system. In the plane, two perpendicular lines are chosen and the coordinates of a point are taken to be the signed distances to the lines. In three dimensions, three mutually perpendicular planes are chosen and the three coordinates of a point are the signed distances to each of the planes. This can be generalized to create n coordinates for any point in n-dimensional Euclidean space. Depending on the direction and order of the coordinate axes, the system may be a right-handed or a left-handed system. This is one of many coordinate systems. Another common coordinate system for the plane is the polar coordinate system. A point is chosen as the pole and a ray from this point is taken as the polar axis. For a given angle θ, there is a single line through the pole whose angle with the polar axis is θ. Then there is a unique point on this line whose signed distance from the origin is r for a given number r. For a given pair of coordinates (r, θ) there is a single point, but any point is represented by many pairs of coordinates; for example, (r, θ), (r, θ + 2π) and (−r, θ + π) are all polar coordinates for the same point. The pole is represented by (0, θ) for any value of θ. There are two common methods for extending the polar coordinate system to three dimensions.
In the cylindrical coordinate system, a z-coordinate with the same meaning as in Cartesian coordinates is added to the r and θ polar coordinates, giving a triple (r, θ, z). Spherical coordinates take this a step further by converting the pair of cylindrical coordinates (r, z) to polar coordinates, giving a triple (ρ, θ, φ). A point in the plane may be represented in homogeneous coordinates by a triple (x, y, z), where x/z and y/z are the Cartesian coordinates of the point.
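The correspondence between Cartesian and polar coordinates described above can be sketched with the standard conversion formulas (the function names are our own):

```python
# Sketch: converting between Cartesian and polar coordinates in the plane.
import math

def to_polar(x, y):
    """Return (r, theta) for the Cartesian point (x, y)."""
    return math.hypot(x, y), math.atan2(y, x)

def to_cartesian(r, theta):
    """Return (x, y) for the polar point (r, theta)."""
    return r * math.cos(theta), r * math.sin(theta)

r, theta = to_polar(3.0, 4.0)
print(r)  # 5.0

# Many polar pairs name the same point: (r, theta) and (r, theta + 2*pi)
# convert to the same Cartesian coordinates (up to floating-point error).
x, y = to_cartesian(r, theta + 2 * math.pi)
print(abs(x - 3.0) < 1e-9 and abs(y - 4.0) < 1e-9)  # True
```

The round trip makes concrete the point in the text that a Cartesian pair determines a unique point while a point has many polar representations.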

12.
Euclidean geometry
–
Euclidean geometry is a mathematical system attributed to the Alexandrian Greek mathematician Euclid, which he described in his textbook on geometry, the Elements. Euclid's method consists in assuming a small set of intuitively appealing axioms and deducing other propositions from them. Although many of Euclid's results had been stated by earlier mathematicians, Euclid was the first to show how these propositions could fit into a comprehensive deductive and logical system. The Elements begins with plane geometry, still taught in school as the first axiomatic system. It goes on to the solid geometry of three dimensions. Much of the Elements states results of what are now called algebra and number theory, explained in geometrical language. For more than two thousand years, the adjective "Euclidean" was unnecessary because no other sort of geometry had been conceived. Euclid's axioms seemed so obvious that any theorem proved from them was deemed true in an absolute, often metaphysical, sense. Today, however, many other self-consistent non-Euclidean geometries are known. Euclidean geometry is an example of synthetic geometry, in that it proceeds logically from axioms to propositions without the use of coordinates; this is in contrast to analytic geometry, which uses coordinates. The Elements is mainly a systematization of earlier knowledge of geometry. Its improvement over earlier treatments was recognized, with the result that there was little interest in preserving the earlier ones. There are 13 books in the Elements: Books I–IV and VI discuss plane geometry; Books V and VII–X deal with number theory, with numbers treated geometrically via their representation as line segments with various lengths. Notions such as prime numbers and rational and irrational numbers are introduced, and the infinitude of prime numbers is proved. A typical result is the 1:3 ratio between the volume of a cone and a cylinder with the same height and base. Euclidean geometry is an axiomatic system, in which all theorems are derived from a small number of axioms. Euclid gives five postulates, the first two of which are: to draw a straight line from any point to any point, and to produce a finite straight line continuously in a straight line.
To describe a circle with any centre and distance; and that all right angles are equal to one another. Although Euclid's statement of the postulates only explicitly asserts the existence of the constructions, they are also taken to be unique. The Elements also includes five common notions, the first being: things that are equal to the same thing are also equal to one another.
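As a small arithmetic check of the 1:3 cone-to-cylinder volume result quoted above, a sketch in Python (function names are our own):

```python
# Sketch: the volume of a cone is one third that of a cylinder with the
# same base radius r and height h (cone: pi*r^2*h/3, cylinder: pi*r^2*h).
import math

def cylinder_volume(r, h):
    return math.pi * r * r * h

def cone_volume(r, h):
    return math.pi * r * r * h / 3.0

ratio = cone_volume(2.0, 5.0) / cylinder_volume(2.0, 5.0)
print(abs(ratio - 1.0 / 3.0) < 1e-12)  # True
```

The ratio is independent of the particular radius and height, since both factors cancel.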

13.
Tensor algebra
–
In mathematics, the tensor algebra of a vector space V, denoted T(V) or T•(V), is the algebra of tensors on V with multiplication being the tensor product. The tensor algebra is important because many other algebras arise as quotient algebras of T(V); these include the exterior algebra, the symmetric algebra, Clifford algebras, the Weyl algebra and universal enveloping algebras. Note: in this article, all algebras are assumed to be unital; the unit is explicitly required to define the coproduct. Let V be a vector space over a field K. For any nonnegative integer k, we define the kth tensor power of V to be the tensor product of V with itself k times. That is, TkV consists of all tensors on V of order k; by convention, T0V is the ground field K. We then construct T(V) as the direct sum of TkV for k = 0, 1, 2, …: T(V) = ⨁k=0∞ TkV = K ⊕ V ⊕ (V ⊗ V) ⊕ (V ⊗ V ⊗ V) ⊕ ⋯. The multiplication in T(V) is determined by the canonical isomorphism TkV ⊗ TℓV → Tk+ℓV given by the tensor product. This multiplication rule implies that the tensor algebra T(V) is naturally a graded algebra with TkV serving as the grade-k subspace; this grading can be extended to a Z-grading by appending subspaces TkV = {0} for negative integers k. The construction generalizes in a straightforward manner to the tensor algebra of any module M over a commutative ring. If R is a non-commutative ring, one can still perform the construction for any R-R bimodule M. The tensor algebra T(V) is also called the free algebra on the vector space V. As with other free constructions, the functor T is left adjoint to a forgetful functor: in this case, the functor that sends each K-algebra to its underlying vector space. One can, in fact, define the tensor algebra T(V) as the unique algebra satisfying this universal property. The above universal property shows that the construction of the tensor algebra is functorial in nature.
That is, T is a functor from K-Vect, the category of vector spaces over K, to K-Alg, the category of K-algebras. The functoriality of T means that any linear map from V to W extends uniquely to an algebra homomorphism from T(V) to T(W). If V has finite dimension n, another way of looking at the tensor algebra is as the algebra of polynomials over K in n non-commuting variables. If we take basis vectors for V, those become non-commuting variables in T(V), subject to no constraints beyond associativity, the distributive law, and K-linearity. Many algebras of interest then arise as quotients of T(V) obtained by imposing relations on these generators; examples of this are the exterior algebra, the symmetric algebra, Clifford algebras, the Weyl algebra and universal enveloping algebras. The tensor algebra has two different coalgebra structures. One is compatible with the tensor product, and thus can be extended to a bialgebra, and can further be extended with an antipode to a Hopf algebra structure. The other structure, although simpler, cannot be extended to a bialgebra. The first structure is developed immediately below; the second is given in the section on the cofree coalgebra, further down
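The "polynomials in non-commuting variables" viewpoint just described can be sketched concretely. The following is a minimal illustration, not any library's API: elements of T(V) are finite linear combinations of words in chosen basis symbols, and the product concatenates words. All names here (tensor_product, e1, e2) are illustrative choices.

```python
# A sketch of the tensor algebra T(V) as the free (non-commutative) algebra
# on basis symbols: elements are dicts mapping words (tuples of symbols) to
# coefficients; multiplication concatenates words and is extended bilinearly.

from collections import defaultdict

def tensor_product(a, b):
    """Multiply two elements of T(V), each given as {word(tuple): coefficient}."""
    out = defaultdict(float)
    for wa, ca in a.items():
        for wb, cb in b.items():
            out[wa + wb] += ca * cb   # concatenating words = tensor product
    return dict(out)

# Basis vectors e1, e2 of V sit in grade 1; the empty word spans T0V = K
# and acts as the unit of the algebra.
e1 = {("e1",): 1.0}
e2 = {("e2",): 1.0}
unit = {(): 1.0}

x = tensor_product(e1, e2)   # e1 ⊗ e2, a grade-2 element
y = tensor_product(e2, e1)   # e2 ⊗ e1: not equal to x, since nothing commutes
print(x)                     # {('e1', 'e2'): 1.0}
print(tensor_product(unit, x) == x)   # True: the empty word is the unit
```

Because no relations are imposed beyond bilinearity and associativity of concatenation, e1 ⊗ e2 and e2 ⊗ e1 remain distinct, exactly as in the free algebra.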

14.
Differential geometry
–
Differential geometry is a mathematical discipline that uses the techniques of differential calculus, integral calculus, linear algebra and multilinear algebra to study problems in geometry. The theory of plane and space curves and of surfaces in three-dimensional Euclidean space formed the basis for the development of differential geometry during the 18th century. Since the late 19th century, differential geometry has grown into a field concerned more generally with geometric structures on differentiable manifolds. Differential geometry is closely related to differential topology and to the geometric aspects of the theory of differential equations. The differential geometry of surfaces captures many of the key ideas of the field. Differential geometry arose and developed as a result of, and in connection with, the mathematical analysis of curves and surfaces; unanswered questions in that analysis indicated greater, hidden relationships. Initially applied to Euclidean space, further explorations led to non-Euclidean spaces and to metric and topological spaces. Riemannian geometry studies Riemannian manifolds, smooth manifolds with a Riemannian metric, that is, a notion of distance expressed by means of a smooth positive-definite symmetric bilinear form defined on the tangent space at each point. Various concepts based on length, such as the arc length of curves, the area of plane regions, and the volume of solids, possess natural analogues in Riemannian geometry. The notion of a directional derivative of a function from multivariable calculus is extended in Riemannian geometry to the notion of a covariant derivative of a tensor. Many concepts and techniques of analysis and differential equations have been generalized to the setting of Riemannian manifolds. A distance-preserving diffeomorphism between Riemannian manifolds is called an isometry. This notion can also be defined locally, i.e., for small neighborhoods of points; any two regular curves are locally isometric.
In higher dimensions, the Riemann curvature tensor is an important pointwise invariant associated with a Riemannian manifold that measures how close it is to being flat. An important class of Riemannian manifolds is the Riemannian symmetric spaces, whose curvature is not necessarily constant; these are the closest analogues to the plane and space considered in Euclidean and non-Euclidean geometry. Pseudo-Riemannian geometry generalizes Riemannian geometry to the case in which the metric tensor need not be positive-definite. A special case of this is a Lorentzian manifold, which is the mathematical basis of Einstein's general relativity theory of gravity. Finsler geometry has the Finsler manifold as its main object of study: a manifold with a Finsler metric, i.e., a Banach norm defined on each tangent space. Riemannian manifolds are special cases of the more general Finsler manifolds. A Finsler structure on a manifold M is a function F : TM → [0, ∞) such that F(x, my) = |m| F(x, y) for all (x, y) in TM, and F is infinitely differentiable on TM ∖ {0}. Symplectic geometry is the study of symplectic manifolds. A symplectic manifold is an almost symplectic manifold for which the symplectic form ω is closed; a diffeomorphism between two symplectic manifolds which preserves the symplectic form is called a symplectomorphism. Non-degenerate skew-symmetric bilinear forms can only exist on even-dimensional vector spaces. In dimension 2, a symplectic manifold is just a surface endowed with an area form, and a symplectomorphism is an area-preserving diffeomorphism

15.
Physics
–
Physics is the natural science that involves the study of matter and its motion and behavior through space and time, along with related concepts such as energy and force. As one of the most fundamental scientific disciplines, the main goal of physics is to understand how the universe behaves. Physics is one of the oldest academic disciplines, perhaps the oldest through its inclusion of astronomy. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the mechanisms of other sciences while opening new avenues of research in areas such as mathematics. Physics also makes significant contributions through advances in new technologies that arise from theoretical breakthroughs; the United Nations named 2005 the World Year of Physics. Astronomy is the oldest of the natural sciences. The stars and planets were often a target of worship, believed to represent the gods, though the explanations offered for these phenomena were often unscientific and lacking in evidence. According to Asger Aaboe, the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy. The most notable innovations were in the field of optics and vision, which came from the works of many scientists like Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi and Avicenna. The most notable work was The Book of Optics, written by Ibn Al-Haitham, in which he was not only the first to disprove the ancient Greek idea about vision, but also came up with a new theory. In the book, he was also the first to study the phenomenon of the pinhole camera. Many later European scholars and fellow polymaths, from Robert Grosseteste and Leonardo da Vinci to René Descartes, Johannes Kepler and Isaac Newton, were in his debt.
Indeed, the influence of Ibn al-Haytham's Optics ranks alongside that of Newton's work of the same title; the translation of The Book of Optics had a huge impact on Europe. From it, later European scholars were able to build the same devices that Ibn al-Haytham had built, and from this came such important things as eyeglasses, magnifying glasses and telescopes. Physics became a separate science when early modern Europeans used experimental and quantitative methods to discover what are now considered to be the laws of physics. Newton also developed calculus, the mathematical study of change, which provided new methods for solving physical problems. The discovery of new laws in thermodynamics, chemistry, and electromagnetism resulted from greater research efforts during the Industrial Revolution as energy needs increased. However, inaccuracies in classical mechanics for very small objects and very high velocities led to the development of modern physics in the 20th century. Modern physics began in the early 20th century with the work of Max Planck in quantum theory and Albert Einstein's theory of relativity; both of these theories came about due to inaccuracies in classical mechanics in certain situations. Quantum mechanics would come to be pioneered by Werner Heisenberg, Erwin Schrödinger and Paul Dirac; from this early work, and work in related fields, the Standard Model of particle physics was derived. Areas of mathematics in general are important to this field, such as the study of probabilities. In many ways, physics stems from ancient Greek philosophy

16.
Engineering
–
The term Engineering is derived from the Latin ingenium, meaning cleverness, and ingeniare, meaning to contrive or devise. Engineering has existed since ancient times as humans devised fundamental inventions such as the wedge, lever and wheel; each of these inventions is essentially consistent with the modern definition of engineering. The term engineering is derived from the word engineer, which itself dates back to 1390, when an engineer originally referred to a constructor of military engines. In this context, now obsolete, an engine referred to a military machine. Notable examples of the obsolete usage which have survived to the present day are military engineering corps. The word engine itself is of even older origin, ultimately deriving from the Latin ingenium, meaning innate quality, especially mental power, hence a clever invention. The earliest civil engineer known by name is Imhotep; as one of the officials of the Pharaoh Djosèr, he probably designed and supervised the construction of the Pyramid of Djoser at Saqqara in Egypt around 2630–2611 BC. Ancient Greece developed machines in both civilian and military domains; the Antikythera mechanism, the first known mechanical computer, and the mechanical inventions of Archimedes are examples of early mechanical engineering. In the Middle Ages, the trebuchet was developed. The first steam engine was built in 1698 by Thomas Savery, and the development of this device gave rise to the Industrial Revolution in the coming decades. With the rise of engineering as a profession in the 18th century, the term became more narrowly applied to fields in which mathematics and science were applied to these ends; similarly, in addition to military and civil engineering, the fields then known as the mechanic arts became incorporated into engineering. The inventions of Thomas Newcomen and the Scottish engineer James Watt gave rise to mechanical engineering. The development of specialized machines and machine tools during the revolution led to the rapid growth of mechanical engineering both in its birthplace Britain and abroad.
John Smeaton was the first self-proclaimed civil engineer and is regarded as the father of civil engineering. He was an English civil engineer responsible for the design of bridges, canals, harbours and lighthouses, and he was also a capable mechanical engineer and an eminent physicist. Smeaton designed the third Eddystone Lighthouse, where he pioneered the use of hydraulic lime; his lighthouse remained in use until 1877, when it was dismantled and partially rebuilt at Plymouth Hoe, where it is known as Smeaton's Tower. The United States census of 1850 listed the occupation of engineer for the first time, with a count of 2,000; there were fewer than 50 engineering graduates in the U. S. before 1865. In 1870 there were a dozen U. S. mechanical engineering graduates; in 1890 there were 6,000 engineers in civil, mining, mechanical and electrical specialties. There was no chair of applied mechanism and applied mechanics established at Cambridge until 1875. The theoretical work of James Maxwell and Heinrich Hertz in the late 19th century gave rise to the field of electronics

17.
Continuum mechanics
–
Continuum mechanics is a branch of mechanics that deals with the analysis of the kinematics and the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles. The French mathematician Augustin-Louis Cauchy was the first to formulate such models in the 19th century, and research in the area continues today. Modeling an object as a continuum assumes that the substance of the object completely fills the space it occupies. Continuum mechanics deals with physical properties of solids and fluids which are independent of any particular coordinate system in which they are observed. These physical properties are represented by tensors, which are mathematical objects that have the required property of being independent of coordinate system; these tensors can be expressed in coordinate systems for computational convenience. Materials, such as solids, liquids and gases, are composed of molecules separated by space, and on a microscopic scale materials have cracks and discontinuities. A continuum, by contrast, is a body that can be continually sub-divided into infinitesimal elements with properties being those of the bulk material. More specifically, the continuum hypothesis (or assumption) hinges on the concept of a representative elementary volume. This condition provides a link between the experimentalist's and the theoretician's viewpoints on constitutive equations, as well as a way of spatial and statistical averaging of the microstructure; the latter then provides a basis for stochastic finite elements. The levels of the statistical volume element (SVE) and the representative volume element (RVE) link continuum mechanics to statistical mechanics; the RVE may be assessed only in a limited way via experimental testing, when the constitutive response becomes spatially homogeneous. Specifically for fluids, the Knudsen number is used to assess to what extent the approximation of continuity can be made. Consider car traffic on a highway, with just one lane for simplicity.
Somewhat surprisingly, and in a tribute to its effectiveness, continuum mechanics effectively models the movement of cars via a differential equation for the density of cars. The familiarity of this situation empowers us to understand a little of the continuum-discrete dichotomy underlying continuum modelling in general. To start modelling, define: x measures distance along the highway; t is time; ρ(x, t) is the density of cars on the highway; and u(x, t) is their speed. Cars do not appear and disappear: consider any group of cars, from the car at the back of the group located at x = a to the car at the front located at x = b. The total number of cars in this group is N = ∫_a^b ρ dx. Since cars are conserved, dN/dt = 0. The only way an integral can be zero for all intervals is if the integrand is zero for all x. Consequently, conservation yields the first-order nonlinear conservation PDE ∂ρ/∂t + ∂(ρu)/∂x = 0 for all positions on the highway. This conservation PDE applies not only to car traffic but also to fluids, solids, crowds, animals, plants, bushfires and financial traders. This PDE is one equation with two unknowns, so another equation is needed to form a well-posed problem
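The conservation law for cars can be sketched numerically. The following minimal illustration (not part of the article) assumes a ring road, i.e. periodic boundaries, and a constant car speed u so that the flux is simply ρu; a first-order upwind finite-difference scheme advances the density. The grid sizes and initial traffic bump are arbitrary choices.

```python
# Numerical sketch of the car-conservation PDE  ∂ρ/∂t + ∂(ρu)/∂x = 0
# on a ring road (periodic boundary), assuming a constant speed u.

import numpy as np

nx, u, dx, dt = 200, 1.0, 0.05, 0.02     # dt chosen so u*dt/dx <= 1 (CFL)
x = np.arange(nx) * dx
rho = np.exp(-((x - 5.0) ** 2))          # a bump of traffic density

total_before = rho.sum() * dx            # N = ∫ ρ dx, the number of cars
for _ in range(100):
    flux = rho * u                       # q = ρu, cars passing per unit time
    rho = rho - (dt / dx) * (flux - np.roll(flux, 1))  # upwind difference
total_after = rho.sum() * dx

# Conservation: dN/dt = 0, so the total car count is unchanged (to rounding).
print(abs(total_before - total_after) < 1e-9)
```

On the periodic grid the upwind differences telescope, so the discrete scheme conserves the car count exactly, mirroring dN/dt = 0 in the derivation above.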

18.
Electromagnetism
–
Electromagnetism is a branch of physics involving the study of the electromagnetic force, a type of physical interaction that occurs between electrically charged particles. The electromagnetic force usually exhibits electromagnetic fields such as electric fields, magnetic fields and light, and is one of the four fundamental interactions in nature. The other three fundamental interactions are the strong interaction, the weak interaction, and gravitation. The word electromagnetism is a compound form of two Greek terms, ἤλεκτρον (ēlektron, amber) and μαγνῆτις λίθος (magnētis lithos), which means magnesian stone. The electromagnetic force plays a major role in determining the internal properties of most objects encountered in daily life. Ordinary matter takes its form as a result of forces between individual atoms and molecules in matter, and these are a manifestation of the electromagnetic force. Electrons are bound by the electromagnetic force to atomic nuclei, and their orbital shapes are described by quantum mechanics. The electromagnetic force governs the processes involved in chemistry, which arise from interactions between the electrons of neighboring atoms. There are numerous mathematical descriptions of the electromagnetic field; in classical electrodynamics, electric fields are described as electric potential and electric current. Although electromagnetism is considered one of the four fundamental forces, at high energy the weak force and electromagnetic force are unified as a single electroweak force. In the history of the universe, during the quark epoch, this unified force broke into the two separate forces as the universe cooled. Originally, electricity and magnetism were considered to be two separate forces. Magnetic poles attract or repel one another in a manner similar to positive and negative charges, and always exist as pairs: every north pole is yoked to a south pole. An electric current inside a wire creates a corresponding magnetic field outside the wire; its direction depends on the direction of the current in the wire.
A current is induced in a loop of wire when it is moved toward or away from a magnetic field, or when a magnet is moved towards or away from it. While preparing for a lecture on 21 April 1820, Hans Christian Ørsted made a surprising observation: as he was setting up his materials, he noticed a compass needle deflect away from magnetic north when the electric current from the battery he was using was switched on. At the time of discovery, Ørsted did not suggest any explanation of the phenomenon; however, three months later he began more intensive investigations
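The induction phenomenon described above is quantified by Faraday's law of induction, a standard result stated here for context rather than drawn from the article's text: the electromotive force around a loop equals the negative rate of change of the magnetic flux through it,

```latex
\mathcal{E} \;=\; -\frac{d\Phi_B}{dt},
\qquad
\Phi_B \;=\; \int_S \mathbf{B}\cdot d\mathbf{A},
```

where $\mathbf{B}$ is the magnetic field and $S$ is any surface bounded by the loop. Moving the loop or the magnet changes $\Phi_B$, which is why a current flows only while the motion lasts.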

19.
Transport phenomena
–
In engineering, physics and chemistry, the study of transport phenomena concerns the exchange of mass, energy, and momentum between observed and studied systems. While it draws its theoretical foundation from principles in a number of fields, as diverse as continuum mechanics and thermodynamics, most of the fundamental transport theory is a restatement of basic conservation laws. Thus, the different phenomena that lead to transport are each considered individually with the knowledge that the sum of their contributions must equal zero; this principle is useful for calculating many relevant quantities. For example, in fluid mechanics, a common use of transport analysis is to determine the velocity profile of a fluid flowing through a rigid volume. Transport phenomena are ubiquitous throughout the engineering disciplines, and the subject is now considered a part of the engineering discipline as much as thermodynamics, mechanics, and electromagnetism. Transport phenomena encompass all agents of physical change in the universe; moreover, they are considered to be the building blocks which developed the universe. However, the scope here is limited to the relationship of transport phenomena to artificial engineered systems. In physics, transport phenomena are all irreversible processes of statistical nature stemming from the random continuous motion of molecules, mostly observed in fluids. Every aspect of transport phenomena is grounded in two primary concepts: the conservation laws and the constitutive equations. The conservation laws, in the context of transport phenomena, are formulated as continuity equations; the constitutive equations describe how the quantity in question responds to various stimuli via transport. These equations also demonstrate the connection between transport phenomena and thermodynamics, a connection that explains why transport phenomena are irreversible.
Almost all of these physical phenomena ultimately involve systems seeking their lowest energy state in keeping with the principle of minimum energy. As they approach this state, they tend to achieve true thermodynamic equilibrium, at which point there are no longer any driving forces in the system. Examples of transport processes include heat conduction, fluid flow, molecular diffusion, radiation and electric charge transfer in semiconductors. For example, in solid state physics, the motion and interaction of electrons, holes and phonons are studied under transport phenomena. Another example is in biomedical engineering, where some transport phenomena of interest are thermoregulation, perfusion, and microfluidics. In chemical engineering, transport phenomena are studied in reactor design, analysis of molecular or diffusive transport mechanisms, and metallurgy. The transport of mass, energy, and momentum can be affected by the presence of external sources: the rate of cooling of a solid that is conducting heat depends on whether a heat source is applied, and the gravitational force acting on a rain drop counteracts the resistance or drag imparted by the surrounding air. An important principle in the study of transport phenomena is the analogy between phenomena. For energy, the conduction of heat in a material is an example of heat diffusion
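The heat-conduction example just mentioned can be sketched numerically. The following is a minimal illustration, not drawn from the article: the 1D heat equation ∂T/∂t = α ∂²T/∂x² solved with an explicit finite-difference scheme, with the rod length, diffusivity α and grid all being arbitrary choices; the rod ends are held at T = 0 as heat sinks.

```python
# Numerical sketch of 1D heat diffusion  ∂T/∂t = α ∂²T/∂x²
# via an explicit finite-difference scheme (stable when α·dt/dx² <= 1/2).

import numpy as np

nx, alpha, dx, dt = 51, 1.0, 0.02, 1e-4   # alpha*dt/dx**2 = 0.25, stable
T = np.zeros(nx)
T[nx // 2] = 100.0                        # a hot spot in the middle of the rod

for _ in range(500):
    lap = T[:-2] - 2 * T[1:-1] + T[2:]    # discrete second derivative
    T[1:-1] += alpha * dt / dx**2 * lap   # ends stay at T = 0 (heat sinks)

# Heat flows from hot to cold: the peak spreads out and decays.
print(0.0 < T[nx // 2] < 100.0)
```

The system relaxes toward its equilibrium (T = 0 everywhere), illustrating the drive toward thermodynamic equilibrium described above, and the irreversibility: the sharp hot spot never spontaneously reforms.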

20.
General relativity
–
General relativity is the geometric theory of gravitation published by Albert Einstein in 1915 and the current description of gravitation in modern physics. General relativity generalizes special relativity and Newton's law of gravitation, providing a unified description of gravity as a geometric property of space and time. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present; the relation is specified by the Einstein field equations, a system of partial differential equations. Examples of differences between general relativity and classical physics include gravitational time dilation, gravitational lensing, and the gravitational redshift of light. The predictions of general relativity have been confirmed in all observations to date. Although general relativity is not the only relativistic theory of gravity, it is the simplest theory that is consistent with experimental data. Einstein's theory has important astrophysical implications; for example, it implies the existence of black holes, regions of space in which space and time are distorted in such a way that nothing, not even light, can escape, as an end-state for massive stars. The bending of light by gravity can lead to the phenomenon of gravitational lensing, and general relativity also predicts the existence of gravitational waves, which have since been observed directly by the LIGO collaboration. In addition, general relativity is the basis of current cosmological models of an expanding universe. Soon after publishing the special theory of relativity in 1905, Einstein started thinking about how to incorporate gravity into his new relativistic framework. In 1907, beginning with a simple thought experiment involving an observer in free fall, he embarked on what would be an eight-year search for a relativistic theory of gravity. After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations. These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present. The Einstein field equations are nonlinear and very difficult to solve.
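For reference, the field equations just mentioned take a compact standard form (shown here with the cosmological constant term Λ, which Einstein added in 1917):

```latex
G_{\mu\nu} + \Lambda g_{\mu\nu}
  \;=\; R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu}
  \;=\; \frac{8\pi G}{c^{4}}\, T_{\mu\nu},
```

where $R_{\mu\nu}$ is the Ricci curvature tensor, $R$ the scalar curvature, $g_{\mu\nu}$ the metric tensor, and $T_{\mu\nu}$ the stress-energy tensor of the matter and radiation present. The nonlinearity arises because the curvature terms on the left depend nonlinearly on the metric and its derivatives.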
Einstein used approximation methods in working out initial predictions of the theory, but as early as 1916, the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse. In 1917, Einstein applied his theory to the universe as a whole; in line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations, the cosmological constant, to match that observational presumption. By 1929, however, the work of Hubble and others had shown that our universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which our universe has evolved from an extremely hot and dense earlier state. Einstein later declared the cosmological constant the biggest blunder of his life

21.
Computer vision
–
Computer vision is an interdisciplinary field that deals with how computers can be made to gain high-level understanding from digital images or videos, e.g., in the form of decisions. From the perspective of engineering, it seeks to automate tasks that the human visual system can do. Understanding in this context means the transformation of visual images into descriptions of the world that can interface with other thought processes. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics and learning theory. As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images; the image data can take many forms, such as video sequences or views from multiple cameras. As a technological discipline, computer vision seeks to apply its theories and models to the construction of computer vision systems. Sub-domains of computer vision include scene reconstruction, event detection, video tracking, object recognition, object pose estimation, learning, indexing, motion estimation, and image restoration. Computer vision is concerned with the automatic extraction, analysis and understanding of useful information from a single image or a sequence of images; it involves the development of a theoretical and algorithmic basis to achieve automatic visual understanding. In the late 1960s, computer vision began at universities that were pioneering artificial intelligence.
It was meant to mimic the human visual system, as a stepping stone to endowing robots with intelligent behavior. In 1966, it was believed that this could be achieved through a summer project, by attaching a camera to a computer and having it describe what it saw. The next decade saw studies based on more rigorous mathematical analysis. These include the concept of scale-space and the inference of shape from various cues such as shading, texture and focus; researchers also realized that many of these mathematical concepts could be treated within the same optimization framework as regularization and Markov random fields. By the 1990s, some of the research topics became more active than the others. Research in projective 3-D reconstructions led to better understanding of camera calibration; with the advent of optimization methods for camera calibration, it was realized that a lot of the ideas were already explored in bundle adjustment theory from the field of photogrammetry

22.
Index notation
–
In mathematics and computer programming, index notation is used to specify the elements of an array of numbers. The formalism of how indices are used varies according to the subject. It is frequently helpful in mathematics to refer to the elements of an array using subscripts; the subscripts can be integers or variables. The array takes the form of tensors in general, since these can be treated as multi-dimensional arrays; special cases are vectors and matrices. The following is only an introduction to the concept; index notation is used in more detail in mathematics (see the main article for further details). For example, given a vector a whose first entry is 10, one writes a1 = 10. The notation can be applied to vectors in mathematics and physics. The vector equation a + b = c can also be written in terms of the elements of the vectors as ai + bi = ci, and this expression represents a set of equations, one for each index. If the vectors each have n elements, meaning i = 1, 2, …, n, then there are n equations. The notation ij should not be confused with i multiplied by j; it is read as i-j. For example, given a matrix A, its entries carry two indices, as in a11 = 9 and a12 = 8; for indices larger than 9, the comma-based notation may be superior. Matrix equations are written similarly to vector equations, such as A + B = C in terms of the elements of the matrices: Aij + Bij = Cij for all values of i and j. Again this expression represents a set of equations, one for each index; if the matrices each have m rows and n columns, meaning i = 1, 2, …, m and j = 1, 2, …, n, then there are mn equations. The notation allows a clear generalization to multi-dimensional arrays of elements: for example, Ai1i2⋯ + Bi1i2⋯ = Ci1i2⋯, representing a set of many equations. In tensor analysis, superscripts are used instead of subscripts to distinguish covariant from contravariant entities; see covariance and contravariance of vectors. In several programming languages, index notation is a way of addressing elements of an array.
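The componentwise reading of a + b = c described above, one scalar equation per index value, can be illustrated with a short sketch; the particular entries below are illustrative (only a1 = 10 comes from the text).

```python
# A vector equation a + b = c is a set of scalar equations a_i + b_i = c_i,
# one for each index i = 1, ..., n.

import numpy as np

a = np.array([10, 8, 9, 6, 3, 5])   # a_1 = 10, as in the text; rest illustrative
b = np.array([1, 2, 3, 4, 5, 6])
c = a + b                           # applies the equation for every index at once

# Componentwise, this is n separate scalar equations:
for i in range(len(a)):
    assert a[i] + b[i] == c[i]
print(list(c))                      # [11, 10, 12, 10, 8, 11]
```

The same pattern extends to matrices (Aij + Bij = Cij, giving mn equations) and to multi-dimensional arrays.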
In general, the address of the ith element of an array with base address b and element size s is b + i·s. In the C programming language, we can write this as *(base + i) or, equivalently, base[i]. Coincidentally, since pointer addition is commutative, this allows for obscure expressions such as 3[base], which is equivalent to base[3]. Things become more interesting when we consider arrays with more than one index, for example, a two-dimensional table: one can either compute a single index from the pair of indices, or treat the table as an array of rows. When the first method is used, the programmer decides how the elements of the array are laid out in the computer's memory; the second method is used when the number of elements in each row is the same and known at the time the program is written. The programmer declares the array to have, say, three columns by writing e.g. elementtype tablename[][3];. One then refers to a particular element of the array by writing tablename[first index][second index]
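The addressing rule above can be made concrete. The following minimal sketch (addresses are plain integers, not real pointers; the function names are illustrative) computes the flat address of a one-dimensional element and of a row-major two-dimensional element, the layout C uses.

```python
# Address arithmetic for arrays: element i of a 1-D array with base address b
# and element size s lives at b + i*s; element (i, j) of a row-major table
# with ncols columns lives at b + (i*ncols + j)*s.

def element_address(base, i, size):
    return base + i * size

def table_address(base, i, j, ncols, size):
    return base + (i * ncols + j) * size   # row-major layout, as in C

base, size, ncols = 1000, 4, 3             # e.g. an array of 4-byte ints
print(element_address(base, 5, size))      # 1020
print(table_address(base, 2, 1, ncols, size))   # 1000 + (2*3 + 1)*4 = 1028
```

This "compute a single index from the two" step is exactly what a C compiler emits for a declared two-dimensional array.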

23.
Penrose graphical notation
–
In mathematics and physics, Penrose graphical notation or tensor diagram notation is a visual depiction of multilinear functions or tensors proposed by Roger Penrose in 1971. A diagram in the notation consists of several shapes linked together by lines. The notation has been studied extensively by Predrag Cvitanović, who used it to classify the classical Lie groups; it has also been generalized using representation theory to spin networks in physics, and, with the presence of matrix groups, to trace diagrams in linear algebra. In the language of multilinear algebra, each shape represents a multilinear function. The lines attached to shapes represent the inputs or outputs of a function, and connecting lines between two shapes corresponds to contraction of indices. One advantage of this notation is that one does not have to invent new letters for new indices. This notation is also explicitly basis-independent. In the matrix interpretation, each shape represents a matrix; tensor multiplication is done horizontally, and matrix multiplication is done vertically. The metric tensor is represented by a U-shaped loop or an upside-down U-shaped loop. The Levi-Civita antisymmetric tensor is represented by a thick horizontal bar with sticks pointing downwards or upwards, depending on the type of tensor that is used. The structure constants of a Lie algebra are represented by a small triangle with one line pointing upwards and two lines pointing downwards. Contraction of indices is represented by joining the index lines together. Symmetrization of indices is represented by a thick zig-zag or wavy bar crossing the index lines horizontally; antisymmetrization of indices is represented by a thick straight line crossing the index lines horizontally. The determinant is formed by applying antisymmetrization to the indices. The covariant derivative is represented by a circle around the tensor to be differentiated and a line joined from the circle pointing downwards to represent the lower index of the derivative.
The diagrammatic notation is useful in manipulating tensor algebra, and it usually involves a few simple identities of tensor manipulations. For example, the contraction of a closed loop yields a factor of n, where n is the number of dimensions, a common identity. The Ricci and Bianchi identities given in terms of the Riemann curvature tensor illustrate the power of the notation. The notation has been extended with support for spinors and twistors. See also: abstract index notation, angular momentum diagrams, braided monoidal categories, categorical quantum mechanics (which uses tensor diagram notation), Ricci calculus, spin networks, and trace diagrams.

24.
Voigt notation
–
In mathematics, Voigt notation or Voigt form in multilinear algebra is a way to represent a symmetric tensor by reducing its order. There are a few variants and associated names for this idea: Mandel notation, Mandel–Voigt notation; Kelvin notation is a revival by Helbig of old ideas of Lord Kelvin. The differences here lie in certain weights attached to the selected entries of the tensor. Nomenclature may vary according to what is traditional in the field of application. For example, a 2×2 symmetric tensor X has only three distinct elements, the two on the diagonal and the other being off-diagonal; thus it can be expressed as the vector ⟨x11, x22, x12⟩. As another example, the symmetric 3×3 stress tensor σ, with components σxx, σyy, σzz, σyz, σxz, σxy, is simplified in Voigt notation to the 6-dimensional vector σ̃ = (σxx, σyy, σzz, σyz, σxz, σxy). The strain tensor ϵ, similar in nature to the stress tensor (both are symmetric second-order tensors), has matrix components ϵxx, ϵyy, ϵzz, ϵyz, ϵxz, ϵxy; its representation in Voigt notation is ϵ̃ = (ϵxx, ϵyy, ϵzz, γyz, γzx, γxy), where γxy = 2ϵxy, γyz = 2ϵyz, and γzx = 2ϵzx are engineering shear strains. The benefit of using different representations for stress and strain is that the scalar invariant σ · ϵ = σij ϵij = σ̃ · ϵ̃ is preserved. Likewise, a three-dimensional symmetric fourth-order tensor can be reduced to a 6×6 matrix. Voigt indexes number the distinct pairs consecutively: 11 → 1, 22 → 2, 33 → 3, 23 → 4, 13 → 5, 12 → 6. For a symmetric tensor of second rank σ, only six components are distinct, the three on the diagonal and the three off-diagonal; thus it can be expressed, in Mandel notation, as the vector σ̃M = ⟨σ11, σ22, σ33, √2 σ23, √2 σ13, √2 σ12⟩. A symmetric fourth-order tensor D can likewise be expressed in Mandel notation as a 6×6 matrix D̃M, with corresponding factors of √2 and 2 so that contractions are preserved. The notation is named after physicist Woldemar Voigt. It is useful, for example, in calculations involving constitutive models to simulate materials, such as the generalized Hooke's law, as well as finite element analysis and diffusion MRI.
Hooke's law has a symmetric fourth-order stiffness tensor with 81 components; Voigt notation enables this to be simplified to a 6×6 matrix. However, Voigt's form does not preserve the sum of the squares, and this explains why weights are introduced. A discussion of the invariance of Voigt's notation and Mandel's notation can be found in Helnwein.
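The invariance of σ · ε under the Voigt and Mandel conventions can be verified numerically. This is a minimal sketch with my own function names (not a library API), assuming the common index ordering 11, 22, 33, 23, 13, 12:

```python
import numpy as np

def to_voigt_stress(s):
    # Voigt ordering 11, 22, 33, 23, 13, 12; stresses carry no extra weight
    return np.array([s[0, 0], s[1, 1], s[2, 2], s[1, 2], s[0, 2], s[0, 1]])

def to_voigt_strain(e):
    # Strains carry the engineering factor 2 on shear components
    return np.array([e[0, 0], e[1, 1], e[2, 2],
                     2 * e[1, 2], 2 * e[0, 2], 2 * e[0, 1]])

def to_mandel(t):
    # Mandel weights sqrt(2) on shear components of BOTH tensors
    r2 = np.sqrt(2.0)
    return np.array([t[0, 0], t[1, 1], t[2, 2],
                     r2 * t[1, 2], r2 * t[0, 2], r2 * t[0, 1]])

rng = np.random.default_rng(0)
a = rng.standard_normal((3, 3)); sigma = (a + a.T) / 2   # symmetric stress
b = rng.standard_normal((3, 3)); eps = (b + b.T) / 2     # symmetric strain

full = np.einsum("ij,ij->", sigma, eps)                  # sigma_ij eps_ij
voigt = to_voigt_stress(sigma) @ to_voigt_strain(eps)
mandel = to_mandel(sigma) @ to_mandel(eps)
assert np.isclose(full, voigt) and np.isclose(full, mandel)
```

The asymmetric weighting in Voigt form (factor 2 on strains only) and the symmetric √2 weighting in Mandel form both recover the double contraction, which is why the scalar invariant survives the reduction in order.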

25.
Tensor density
–
In differential geometry, a tensor density or relative tensor is a generalization of the tensor concept. A distinction is made among tensor densities, pseudotensor densities, even tensor densities, and odd tensor densities; sometimes tensor densities with a negative weight W are called tensor capacities. A tensor density can also be regarded as a section of the tensor product of a tensor bundle with a density bundle. Some authors classify tensor densities into the two types called tensor densities and pseudotensor densities in this article; other authors classify them differently, into the types called even tensor densities and odd tensor densities. When a tensor density's weight is an integer, there is an equivalence between these approaches that depends upon whether the integer is even or odd. Note that these classifications elucidate the different ways that tensor densities may transform somewhat pathologically under orientation-reversing coordinate transformations; regardless of their classification into these types, there is only one way that tensor densities transform under orientation-preserving coordinate transformations. In this article we have chosen the convention that assigns a weight of +2 to the determinant of the metric tensor expressed with covariant indices; with this choice, classical densities, like charge density, are represented by tensor densities of weight +1. Some authors use a convention for weights that is the negation of the one presented here. Because the Jacobian determinant can be negative, as it is for an orientation-reversing coordinate transformation, the transformation law can involve a sign; we say that a tensor density is a pseudotensor density when there is a sign flip under an orientation-reversing coordinate transformation. The transformation laws for even and odd tensor densities have the benefit of being well defined even when W is not an integer; thus one can speak of, say, an odd tensor density of weight +2 or an even tensor density of weight −1/2. A tensor density of any type that has weight zero is called an absolute tensor.
A tensor density of weight zero is also called an ordinary tensor. If a weight is not specified but the word "relative" or "density" is used in a context where a specific weight is needed, the weight is usually assumed to be +1. A linear combination of tensor densities of the same type and weight W is again a tensor density of that type and weight. A product of two tensor densities of any types with weights W1 and W2 is a tensor density of weight W1 + W2. The contraction of indices on a tensor density with weight W again yields a tensor density of weight W. Using the last two facts, one sees that raising and lowering indices using the metric tensor (which has weight zero) leaves the weight unchanged.
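The weight bookkeeping above can be illustrated numerically. This is a sketch under the article's stated convention (covariant metric determinant has weight +2); the matrices and variable names are my own examples, and sign conventions differ between authors:

```python
import numpy as np

# Linear change of coordinates xbar = A x; the Jacobian dx/dxbar is A^{-1}.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
J = np.linalg.inv(A)                      # Jacobian matrix dx/dxbar

# The covariant metric picks up two Jacobian factors, so its determinant
# transforms with det(J)**2, i.e. as a density of weight +2.
g = np.array([[1.0, 0.2],
              [0.2, 2.0]])                # covariant metric in x coordinates
gbar = J.T @ g @ J
assert np.isclose(np.linalg.det(gbar),
                  np.linalg.det(J) ** 2 * np.linalg.det(g))

# A weight +1 scalar density (like a charge density) transforms with one
# factor of det(J); products of densities add their weights.
tau = 5.0
taubar = np.linalg.det(J) ** 1 * tau
```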

26.
Tensor bundle
–
In mathematics, the tensor product of modules is a construction that allows arguments about bilinear maps to be carried out in terms of linear maps. Tensor products are important in areas of abstract algebra, homological algebra, and algebraic topology. The universal property of the tensor product of vector spaces extends to more general situations in abstract algebra. It allows the study of bilinear or multilinear operations via linear operations; the tensor product of an algebra and a module can be used for extension of scalars, and, for a commutative ring, the tensor product of modules can be iterated to form the tensor algebra of a module. If φ, ψ are balanced products M × N → G, then the pointwise operations φ + ψ and −φ are also balanced products, and this turns the set L_R(M × N; G) of balanced products into an abelian group. For M and N fixed, the map G ↦ L_R(M × N; G) is a functor from the category of abelian groups to the category of sets; the morphism part is given by mapping a group homomorphism g : G → G′ to the function φ ↦ g ∘ φ. The defining properties of a balanced product state the left and right distributivity of φ over addition, together with an associativity-like compatibility with the action of R. Every ring R is an R-R-bimodule, so the ring multiplication (r, r′) ↦ r ⋅ r′ in R is an R-balanced product R × R → R. The mapping ⊗ : M × N → M ⊗_R N is called canonical; the definition by itself does not prove the existence of M ⊗_R N (see below for a construction). The tensor product can also be defined as a representing object for the functor G ↦ L_R(M × N; G); this is a way of stating the universal mapping property given above. Similarly, the natural identification L_R(M × N; G) = Hom_R(M, Hom(N, G)) is known as the tensor–hom adjunction; see also § Properties. For each x in M and y in N, one writes x ⊗ y for the image of (x, y) under the canonical map ⊗ : M × N → M ⊗_R N; it is often called a pure tensor. Strictly speaking, the correct notation would be x ⊗_R y. We have 0 = q ∘ ⊗ as well as 0 = 0 ∘ ⊗; hence, by the uniqueness part of the universal property, q = 0.
The second statement holds because, to define a module homomorphism, it is enough to define it on a generating set of the module. ◻ The proposition says that one can work with explicit elements of the tensor product instead of invoking the universal property directly each time.
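The factoring of balanced (bilinear) maps through the canonical map can be made concrete over the real numbers. In this sketch (names and the matrix B are my own illustration, not from the text), every bilinear map φ equals a linear map applied to the pure tensor x ⊗ y, realized as an outer product:

```python
import numpy as np

# phi(x, y) = x^T B y is a bilinear (R-balanced) map R^2 x R^2 -> R.
B = np.array([[1.0, 2.0],
              [3.0, 4.0]])

def phi(x, y):
    return x @ B @ y

def phi_bar(t):
    # The induced LINEAR map on the tensor product: sum_ij B_ij t_ij
    return np.sum(B * t)

x = np.array([1.0, -2.0])
y = np.array([0.5, 3.0])
t = np.outer(x, y)                 # the pure tensor x (tensor) y
assert np.isclose(phi(x, y), phi_bar(t))   # phi factors as phi_bar ∘ ⊗
```

This is exactly the universal property in finite-dimensional disguise: the bilinear data of φ is repackaged as the linear functional φ̄ on the tensor product.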

27.
Operation (mathematics)
–
In mathematics, an operation is a calculation from zero or more input values to an output value. The number of operands is the arity of the operation. The most commonly studied operations are binary operations of arity 2, such as addition and multiplication, and unary operations of arity 1, such as additive inverse and multiplicative inverse. An operation of arity zero, or 0-ary operation, is a constant; the mixed product is an example of an operation of arity 3, or ternary operation. Generally, the arity is supposed to be finite, but infinitary operations are sometimes considered; in this context, the usual operations of finite arity are also called finitary operations. The two most common types of operations are unary and binary. Unary operations involve only one value, such as negation and the trigonometric functions; binary operations, on the other hand, take two values, and include addition, subtraction, multiplication, division, and exponentiation. Operations can involve mathematical objects other than numbers: the logical values true and false can be combined using logic operations, such as and, or, and not; vectors can be added and subtracted; rotations can be combined using the function composition operation, performing the first rotation and then the second. Operations on sets include the binary operations union and intersection and the unary operation of complementation. Operations on functions include composition and convolution. Operations may not be defined for every possible value: for example, in the real numbers one cannot divide by zero or take square roots of negative numbers. The values for which an operation is defined form a set called its domain; the set which contains the values produced is called the codomain, but the set of actual values attained by the operation is its range. For example, in the real numbers, the squaring operation only produces non-negative numbers.
A vector can be multiplied by a scalar to form another vector, and the inner product operation on two vectors produces a scalar. An operation may or may not have certain properties; for example, it may be associative, commutative, anticommutative, or idempotent. The values combined are called operands, arguments, or inputs, and the value produced is called the value, result, or output. Operations can have fewer or more than two inputs. An operation is similar to an operator, but the point of view is different. An operation ω is a function of the form ω : V → Y, where V ⊂ X₁ × … × X_k. The sets X₁, …, X_k are called the domains of the operation, the set Y is called the codomain of the operation, and the fixed non-negative integer k is the arity; thus a unary operation has arity one, and a binary operation has arity two.
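The mixed product mentioned above as a ternary operation can be written out directly. A minimal sketch (the function name is my own): an operation ω : R³ × R³ × R³ → R of arity 3.

```python
import numpy as np

def mixed_product(a, b, c):
    # The scalar triple product a . (b x c): a ternary operation on R^3
    # whose domains are three copies of R^3 and whose codomain is R.
    return float(np.dot(a, np.cross(b, c)))

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
c = np.array([0.0, 0.0, 1.0])
print(mixed_product(a, b, c))  # 1.0, the signed volume of the unit cube
```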

28.
Tensor product
–
The tensor product space is thus the "freest" such vector space, in the sense of having the fewest constraints. The tensor product of vector spaces has dimension equal to the product of the dimensions of the two factors: dim(V ⊗ W) = dim V · dim W. In particular, this distinguishes the tensor product from the direct sum vector space, whose dimension is the sum of the dimensions. In each such case the tensor product is characterized by a similar universal property: it is the freest bilinear operation. The general concept of such a product is captured by monoidal categories. The ⊠ variant of ⊗ is used in control theory. The tensor product of two vector spaces V and W over a field K is another vector space over K, denoted V ⊗_K W, or V ⊗ W when the underlying field K is understood; the product operation ⊗ : V × W → V ⊗ W is quickly verified to be bilinear. As an example, letting V = W = R³ and considering the standard basis for each, the tensor product V ⊗ W is spanned by the nine basis vectors x̂ ⊗ x̂, x̂ ⊗ ŷ, …, ẑ ⊗ ẑ. For the vectors v = (1, 2, 3) and w = (1, 0, 0) in R³, the product is v ⊗ w = x̂ ⊗ x̂ + 2 ŷ ⊗ x̂ + 3 ẑ ⊗ x̂. The above definition relies on a choice of basis, which cannot be done canonically for a general vector space; however, any two choices of basis lead to isomorphic tensor product spaces. Alternatively, the tensor product may be defined in an expressly basis-independent manner as a quotient space of a free vector space over V × W. The free vector space F(S) over a set S is a vector space over K with the usual addition; it has a basis parameterized by S, and because of this explicit expression, an element of F(S) is often called a formal sum of symbols in S. By construction, the dimension of the vector space F(S) equals the cardinality of the set S. Let us first consider a special case: say V, W are the free vector spaces for the sets S, T respectively. In this special case, the tensor product is defined as F(S) ⊗ F(T) = F(S × T). In most typical cases, any vector space can be immediately understood as the free vector space for some set. However, there is also a way of constructing the tensor product directly from V and W.
In other words, the operations are well-defined. Because it is the quotient of the free vector space by the subspace generated by the bilinearity relations, the result is in this way the freest such vector space. For this reason, the tensor product V ⊗ W can also be characterised by a universal property. The following expression explicitly gives the subspace N:
N = span{ (v₁ + v₂) ⊗ w − v₁ ⊗ w − v₂ ⊗ w,  v ⊗ (w₁ + w₂) − v ⊗ w₁ − v ⊗ w₂,  (cv) ⊗ w − c(v ⊗ w),  v ⊗ (cw) − c(v ⊗ w) }, ranging over all v, v₁, v₂ ∈ V, w, w₁, w₂ ∈ W, and c ∈ K.
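For finite-dimensional spaces in a fixed basis, the tensor product of two vectors is just the array of all products v_i w_j. This is a numeric sketch of the R³ example from the text, with the outer product standing in for ⊗:

```python
import numpy as np

# v (tensor) w in coordinates: the component on e_i ⊗ e_j is v_i * w_j.
v = np.array([1.0, 2.0, 3.0])
w = np.array([1.0, 0.0, 0.0])
t = np.outer(v, w)

# dim(V ⊗ W) = dim V * dim W = 9: one component per basis tensor e_i ⊗ e_j
print(t.size)  # 9

# The text's expansion x̂⊗x̂ + 2 ŷ⊗x̂ + 3 ẑ⊗x̂: only the first column is nonzero.
assert t[0, 0] == 1.0 and t[1, 0] == 2.0 and t[2, 0] == 3.0
```

Note that t has rank one as a matrix: pure tensors v ⊗ w are exactly the rank-one elements, while a general element of V ⊗ W is a sum of such terms.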