1.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to its exact scope and definition. Mathematicians seek out patterns and use them to formulate new conjectures; they resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, and measurement. Practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said, "The universe cannot be read until we have learned the language and become familiar with the characters in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences.
Applied mathematics has led to entirely new mathematical disciplines, such as statistics. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind. There is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns; in Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and the numeral systems that have emerged have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a study of mathematics in its own right with Greek mathematics. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today; the overwhelming majority of works in this ocean contain new mathematical theorems and their proofs. The word máthēma is derived from the Greek μανθάνω (manthano, "to learn"), while the modern Greek equivalent is μαθαίνω (mathaino). In Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study" even in Classical times.
2.
Vector space
–
A vector space is a collection of objects called vectors, which may be added together and multiplied ("scaled") by numbers, called scalars in this context. Scalars are often taken to be real numbers, but there are also vector spaces with scalar multiplication by complex numbers, rational numbers, or generally any field. The operations of vector addition and scalar multiplication must satisfy certain requirements, called axioms. Euclidean vectors are an example of a vector space; they represent physical quantities such as forces: any two forces of the same type can be added to yield a third, and the multiplication of a force vector by a real multiplier is another force vector. In the same vein, but in a more geometric sense, vectors representing displacements in the plane or in three-dimensional space also form vector spaces. Vector spaces are the subject of linear algebra and are well characterized by their dimension, which, roughly speaking, specifies the number of independent directions in the space. Infinite-dimensional vector spaces arise naturally in mathematical analysis as function spaces, and these vector spaces are generally endowed with additional structure, which may be a topology, allowing the consideration of issues of proximity and continuity. Among these topologies, those that are defined by a norm or inner product are commonly used; this is particularly the case for Banach spaces and Hilbert spaces. Historically, the first ideas leading to vector spaces can be traced back as far as the 17th century's analytic geometry, matrices, systems of linear equations, and Euclidean vectors. Today, vector spaces are applied throughout mathematics, science and engineering. Furthermore, vector spaces furnish an abstract, coordinate-free way of dealing with geometrical and physical objects such as tensors. This in turn allows the examination of local properties of manifolds by linearization techniques. Vector spaces may be generalized in several ways, leading to more advanced notions in geometry and abstract algebra.
The concept of vector space will first be explained by describing two particular examples. The first example of a vector space consists of arrows in a fixed plane, starting at one fixed point. This is used in physics to describe forces or velocities. Given any two such arrows, v and w, the parallelogram spanned by these two arrows contains one diagonal arrow that starts at the origin, too. This new arrow is called the sum of the two arrows and is denoted v + w. When a is a positive real number, av is the arrow that has the same direction as v but is dilated or shrunk by a factor of a; when a is negative, av is defined as the arrow pointing in the opposite direction, instead. The second example consists of ordered pairs of real numbers x and y. Such a pair is written as (x, y); the sum of two such pairs and the multiplication of a pair with a number are defined as follows: (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2) and a(x, y) = (ax, ay). The first example above reduces to this one if the arrows are represented by the pairs of Cartesian coordinates of their end points. A vector space over a field F is a set V together with two operations that satisfy the eight axioms listed below. Elements of V are commonly called vectors, and elements of F are commonly called scalars. The first operation, vector addition, takes any two vectors v and w and outputs a third vector v + w; the second operation, called scalar multiplication, takes any scalar a and any vector v and gives another vector av. In this article, vectors are represented in boldface to distinguish them from scalars.
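The pair-of-numbers example above can be sketched directly in code. This is a minimal illustration, not library code; the helper names `add` and `scale` are our own, and only a few of the eight axioms are spot-checked numerically.

```python
# Sketch: ordered pairs of real numbers form a vector space under
# componentwise addition and scalar multiplication.

def add(v, w):
    """Vector addition: (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2)."""
    return (v[0] + w[0], v[1] + w[1])

def scale(a, v):
    """Scalar multiplication: a(x, y) = (ax, ay)."""
    return (a * v[0], a * v[1])

v, w, u = (1.0, 2.0), (3.0, -1.0), (0.5, 4.0)

# Spot-check a few of the eight axioms.
assert add(v, w) == add(w, v)                    # commutativity of addition
assert add(add(v, w), u) == add(v, add(w, u))    # associativity of addition
assert add(v, (0.0, 0.0)) == v                   # existence of a zero vector
assert scale(2.0, add(v, w)) == add(scale(2.0, v), scale(2.0, w))  # distributivity
```

The same checks would pass for any choice of pairs, which is what the axioms assert in general.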
3.
Euclidean vector
–
In mathematics, physics, and engineering, a Euclidean vector is a geometric object that has magnitude (or length) and direction. Vectors can be added to other vectors according to vector algebra. A Euclidean vector is frequently represented by a line segment with a definite direction, or graphically as an arrow, connecting an initial point A with a terminal point B, and denoted by AB→. A vector is what is needed to "carry" the point A to the point B; the Latin word vector means "carrier", and it was first used by 18th century astronomers investigating planetary revolution around the Sun. The magnitude of the vector is the distance between the two points, and the direction refers to the direction of displacement from A to B. These operations and associated laws qualify Euclidean vectors as an example of the more generalized concept of vectors defined simply as elements of a vector space. Vectors play an important role in physics: the velocity and acceleration of a moving object and many other physical quantities can be usefully thought of as vectors. Although most of them do not represent distances, their magnitude and direction can still be represented by the length and direction of an arrow. The mathematical representation of a physical vector depends on the coordinate system used to describe it. Other vector-like objects that describe physical quantities and transform in a similar way under changes of the coordinate system include pseudovectors and tensors. The concept of vector, as we know it today, evolved gradually over a period of more than 200 years, and about a dozen people made significant contributions. Giusto Bellavitis abstracted the basic idea in 1835 when he established the concept of equipollence: working in a Euclidean plane, he made equipollent any pair of line segments of the same length and orientation. Essentially he realized an equivalence relation on the pairs of points in the plane. The term vector was introduced by William Rowan Hamilton as part of a quaternion, which is a sum q = s + v of a real number s and a 3-dimensional vector v.
Like Bellavitis, Hamilton viewed vectors as representative of classes of equipollent directed segments. Grassmann's work, in contrast, was largely neglected until the 1870s. Peter Guthrie Tait carried the quaternion standard after Hamilton; his 1867 Elementary Treatise of Quaternions included extensive treatment of the nabla or del operator ∇. In 1878, Elements of Dynamic was published by William Kingdon Clifford. Clifford simplified the quaternion study by isolating the dot product and cross product of two vectors from the complete quaternion product; this approach made vector calculations available to engineers and others working in three dimensions and skeptical of the fourth. Josiah Willard Gibbs, who was exposed to quaternions through James Clerk Maxwell's Treatise on Electricity and Magnetism, separated off their vector part for independent treatment. The first half of Gibbs's Elements of Vector Analysis, published in 1881, presents what is essentially the modern system of vector analysis. In 1901 Edwin Bidwell Wilson published Vector Analysis, adapted from Gibbs's lectures. In physics and engineering, a vector is typically regarded as a geometric entity characterized by a magnitude and a direction; it is formally defined as a directed line segment, or arrow.
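The component description of a Euclidean vector can be made concrete with a short sketch. The points `A` and `B` here are our own example values; the snippet computes the displacement vector carrying A to B, its magnitude (the distance between the points), and its direction as an angle from the x-axis.

```python
import math

# Sketch: magnitude and direction of the displacement vector AB.
A = (1.0, 2.0)   # initial point
B = (4.0, 6.0)   # terminal point

# Component form of AB: terminal point minus initial point.
v = (B[0] - A[0], B[1] - A[1])                 # (3.0, 4.0)

magnitude = math.hypot(v[0], v[1])             # distance between A and B
angle = math.degrees(math.atan2(v[1], v[0]))   # direction, measured from the x-axis

assert magnitude == 5.0                        # the 3-4-5 right triangle
```

The same vector (3, 4) also carries (0, 0) to (3, 4): equipollent arrows of the same length and orientation represent one and the same vector.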
4.
Norm (mathematics)
–
In linear algebra and related areas of mathematics, a norm is a function that assigns a strictly positive length or size to each vector in a vector space, except for the zero vector, which is assigned a length of zero. A seminorm, on the other hand, is allowed to assign zero length to some non-zero vectors. A norm must also satisfy certain properties pertaining to scalability and additivity, which are given in the definition below. A simple example is the 2-dimensional Euclidean space R² equipped with the Euclidean norm. Elements in this vector space are usually drawn as arrows in a 2-dimensional Cartesian coordinate system starting at the origin. The Euclidean norm assigns to each vector the length of its arrow; because of this, the Euclidean norm is often known as the magnitude. A vector space on which a norm is defined is called a normed vector space. Similarly, a vector space with a seminorm is called a seminormed vector space. It is often possible to supply a norm for a given vector space in more than one way. If p(v) = 0, then v is the zero vector. By the first axiom, absolute homogeneity, we have p(0) = 0 and p(−v) = p(v), so that by the triangle inequality p(v) ≥ 0 (positivity). A seminorm on V is a function p : V → R satisfying absolute homogeneity and the triangle inequality, but which need not separate points. Every vector space V with seminorm p induces a normed space V/W, called the quotient space, where W is the subspace of V consisting of all vectors v with p(v) = 0. The induced norm on V/W is clearly well-defined and is given by p(W + v) = p(v). A topological vector space is called normable if the topology of the space can be induced by a norm. If a norm p : V → R is given on a vector space V, then the norm of a vector v ∈ V is usually denoted by enclosing it within double vertical lines: ‖v‖. Such notation is also sometimes used if p is only a seminorm. For the length of a vector in Euclidean space, the notation |v| with single vertical lines is also widespread. In Unicode, the codepoint of the double vertical line character ‖ is U+2016. The double vertical line should not be confused with the "parallel to" symbol; this is usually not a problem because the former is used in parenthesis-like fashion, whereas the latter is used as an infix operator. The double vertical line used here should also not be confused with the symbol used to denote lateral clicks.
The single vertical line | is called "vertical line" in Unicode. The trivial seminorm has p(x) = 0 for all x in V. Every linear form f on a vector space defines a seminorm by x → |f(x)|. The absolute value ‖x‖ = |x| is a norm on the one-dimensional vector spaces formed by the real or complex numbers. The absolute value norm is a special case of the L1 norm.
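The norm axioms discussed above can be spot-checked numerically for the Euclidean norm on R². This is a sketch with our own helper name `euclidean_norm`, not a library function; it verifies absolute homogeneity, the triangle inequality, and separation of points for a few sample vectors.

```python
import math

# Sketch: checking the norm axioms for the Euclidean norm on R^2.

def euclidean_norm(v):
    """p(v) = sqrt(v1^2 + v2^2 + ...), the length of the arrow for v."""
    return math.sqrt(sum(x * x for x in v))

v, w, a = (3.0, 4.0), (-1.0, 2.0), -2.0

# Absolute homogeneity: p(a v) = |a| p(v).
assert math.isclose(euclidean_norm([a * x for x in v]),
                    abs(a) * euclidean_norm(v))

# Triangle inequality: p(v + w) <= p(v) + p(w).
assert euclidean_norm([x + y for x, y in zip(v, w)]) \
       <= euclidean_norm(v) + euclidean_norm(w)

# Point separation: p(v) = 0 only for the zero vector.
assert euclidean_norm((0.0, 0.0)) == 0.0 and euclidean_norm(v) > 0.0
```

Dropping the last check (point separation) is exactly what turns a norm into a seminorm.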
5.
Direction vector
–
In mathematics, a direction vector that describes a line D is any vector AB→ where A and B are two distinct points on the line. If v is a direction vector for D, so is kv for any nonzero scalar k. Under some definitions, the direction vector is required to be a unit vector. In Euclidean space, given a point p0 and a direction vector d, a line is defined parametrically as the set of points p where p = p0 + td as t ranges over the real numbers; this line formalism uses the direction vector d to specify the run direction of the line. The line equation p = p0 + td is a parametric form, but not a predicate form. An example of a predicate form of the vector line equation in 2D is p ⋅ o = L. Here o is the orientation, a normalized direction vector pointing perpendicular to the line's run direction. This representation has no ill-behaved special cases such as the infinite slope of a vertical line; numerical algorithms benefit by avoiding such exceptions. The second feature of a 2D line represented this way is its location L. Intuitively and visually, L is the signed distance of the line from the origin. Orientation must be solved before determining location: once o is known, L can be computed given any known point p on the line, L ← p ⋅ o. Lines may be represented as an (orientation, location) feature pair in all cases. Every line has an equivalent representation in higher dimensions; the general form is R p = L, where R is the matrix rotation that aligns the line with an axis, and L is the invariant vector location of points on the line under this rotation.
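The parametric form p = p0 + td and the predicate form p ⋅ o = L can be connected in a short sketch. The point `p0` and direction `d` are our own example values; the snippet builds the perpendicular orientation o by rotating d a quarter turn and normalizing, then checks that every generated point satisfies the predicate.

```python
import math

# Sketch: parametric line p = p0 + t d, and the orientation/location
# predicate form p . o = L described above.
p0 = (1.0, 1.0)        # a point on the line
d = (3.0, 4.0)         # direction vector (the line's run direction)

# Points on the line for a few parameter values t.
points = [(p0[0] + t * d[0], p0[1] + t * d[1]) for t in (-1.0, 0.0, 2.0)]

# Normalized perpendicular orientation o, and location L = p0 . o.
n = math.hypot(*d)
o = (-d[1] / n, d[0] / n)          # rotate d by 90 degrees, then normalize
L = p0[0] * o[0] + p0[1] * o[1]    # signed distance of the line from the origin

# Every point p on the line satisfies p . o == L.
for p in points:
    assert math.isclose(p[0] * o[0] + p[1] * o[1], L)
```

Note that a vertical line (d with zero x-component) causes no special case here, unlike the slope-intercept form.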
6.
Unit circle
–
In mathematics, a unit circle is a circle with a radius of one. Frequently, especially in trigonometry, the unit circle is the circle of radius one centered at the origin in the Cartesian coordinate system in the Euclidean plane. The unit circle is often denoted S¹; the generalization to higher dimensions is the unit sphere. If (x, y) is a point on the unit circle's circumference, then |x| and |y| are the lengths of the legs of a right triangle whose hypotenuse has length 1. Thus, by the Pythagorean theorem, x and y satisfy the equation x² + y² = 1. The interior of the unit circle is called the open unit disk. One may also use other notions of "distance" to define other "unit circles", such as the Riemannian circle; see the article on mathematical norms for additional examples. The unit circle can be considered as the unit complex numbers, i.e. the set of complex numbers of absolute value 1. In quantum mechanics, such a number is referred to as a phase factor. The equation x² + y² = 1 gives the relation cos²(t) + sin²(t) = 1. The unit circle also demonstrates that sine and cosine are periodic functions. Triangles constructed on the unit circle can also be used to illustrate the periodicity of the trigonometric functions. First, construct a radius OP from the origin O to a point P(x1, y1) on the unit circle such that an angle t with 0 < t < π/2 is formed with the positive arm of the x-axis. Now consider a point Q(x1, 0) and line segments PQ ⊥ OQ. The result is a right triangle △OPQ with ∠QOP = t. Because PQ has length y1, OQ length x1, and OP length 1, sin(t) = y1 and cos(t) = x1. Having established these equivalences, take another radius OR from the origin to a point R(−x1, y1) on the circle such that the same angle t is formed with the negative arm of the x-axis. Now consider a point S(−x1, 0) and line segments RS ⊥ OS. The result is a right triangle △ORS with ∠SOR = t. It can hence be seen that, because ∠ROQ = π − t, R is at (cos(π − t), sin(π − t)) in the same way that P is at (cos(t), sin(t)).
The conclusion is that, since (−x1, y1) is the same as (cos(π − t), sin(π − t)) and (x1, y1) is the same as (cos(t), sin(t)), it is true that sin(t) = sin(π − t) and −cos(t) = cos(π − t). It may be inferred in a similar manner that tan(π − t) = −tan(t), since tan(t) = y1/x1 and tan(π − t) = y1/(−x1). A simple demonstration of the above can be seen in the equality sin(π/4) = sin(3π/4) = 1/√2. When working with right triangles, sine, cosine, and other trigonometric functions only make sense for angle measures greater than zero and less than π/2. However, when defined with the unit circle, these functions produce meaningful values for any real-valued angle measure, even those greater than 2π.
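The identities derived above are easy to verify numerically. This sketch picks one arbitrary angle `t` in (0, π/2) and checks the Pythagorean relation, the reflection identities, and the special value sin(π/4) = 1/√2.

```python
import math

# Numerical check of the unit-circle identities derived above.
t = 0.7  # any angle with 0 < t < pi/2

x1, y1 = math.cos(t), math.sin(t)

# (x1, y1) lies on the unit circle: x^2 + y^2 = 1.
assert math.isclose(x1 ** 2 + y1 ** 2, 1.0)

# Reflection across the y-axis: sin(pi - t) = sin(t), cos(pi - t) = -cos(t).
assert math.isclose(math.sin(math.pi - t), y1)
assert math.isclose(math.cos(math.pi - t), -x1)

# Hence tan(pi - t) = -tan(t).
assert math.isclose(math.tan(math.pi - t), -math.tan(t))

# The special value sin(pi/4) = sin(3*pi/4) = 1/sqrt(2).
assert math.isclose(math.sin(math.pi / 4), 1 / math.sqrt(2))
assert math.isclose(math.sin(3 * math.pi / 4), 1 / math.sqrt(2))
```

The same checks pass for any real t, including values greater than 2π, which is the point of the unit-circle definition.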
7.
Unit sphere
–
Usually a specific point has been distinguished as the origin of the space under study, and it is understood that a unit sphere or unit ball is centered at that point. Therefore one speaks of "the" unit ball or "the" unit sphere. For example, a one-dimensional sphere is the boundary of what is commonly called a circle, while such a circle's interior is an open two-dimensional ball. Similarly, a two-dimensional sphere is the surface of the Euclidean solid known colloquially as a sphere, while the interior is an open three-dimensional ball. A unit sphere is simply a sphere of radius one. The importance of the unit sphere is that any sphere can be transformed to a unit sphere by a combination of translation and scaling; in this way the properties of spheres in general can be reduced to the study of the unit sphere. In Euclidean space of n dimensions, the unit sphere is the set of all points (x1, …, xn) which satisfy the equation x1² + x2² + ⋯ + xn² = 1. The volume of the unit ball in n dimensions, which we denote Vn, can be expressed using the gamma function. It is Vn = π^(n/2) / Γ(n/2 + 1), which equals π^(n/2) / (n/2)! if n ≥ 0 is even, and π^⌊n/2⌋ 2^⌈n/2⌉ / n!! if n ≥ 0 is odd, where n!! is the double factorial. The surface areas and the volumes for small values of n follow from these formulas, where the decimal expanded values for n ≥ 2 are rounded to the displayed precision. The An values satisfy the recursion A0 = 0, A1 = 2, A2 = 2π, and An = (2π/(n − 2)) A(n−2) for n > 2. The Vn values satisfy the recursion V0 = 1, V1 = 2, and Vn = (2π/n) V(n−2) for n > 1. The surface area of a sphere with radius r is An r^(n−1) and the volume of the ball with radius r is Vn r^n. For instance, the area is A = 4πr² for the surface of the three-dimensional ball of radius r, and the volume is V = 4πr³/3 for that ball. More precisely, the open unit ball in a normed vector space V, with the norm ‖·‖, is the set {x ∈ V : ‖x‖ < 1}. It is the interior of the closed unit ball {x ∈ V : ‖x‖ ≤ 1}. The latter is the disjoint union of the former and their common border, the unit sphere {x ∈ V : ‖x‖ = 1}. The shape of the unit ball is entirely dependent on the chosen norm; it may well have "corners", and for example may look like the cube [−1, 1]^n in the case of the ℓ∞ norm in Rn.
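The recursions quoted above can be implemented directly and checked against known values and the closed form. This is a sketch with our own function names `volumes` and `areas`; it is not taken from any library.

```python
import math

# Sketch: the recursions for the surface area A_n and volume V_n of the
# unit n-ball, as quoted above.

def volumes(n_max):
    V = [1.0, 2.0]                            # V0 = 1, V1 = 2
    for n in range(2, n_max + 1):
        V.append(2 * math.pi / n * V[n - 2])  # V_n = (2*pi/n) V_{n-2}
    return V

def areas(n_max):
    A = [0.0, 2.0, 2 * math.pi]               # A0 = 0, A1 = 2, A2 = 2*pi
    for n in range(3, n_max + 1):
        A.append(2 * math.pi / (n - 2) * A[n - 2])  # A_n = (2*pi/(n-2)) A_{n-2}
    return A

V, A = volumes(5), areas(5)
assert math.isclose(V[2], math.pi)            # area of the unit disk
assert math.isclose(V[3], 4 * math.pi / 3)    # volume of the unit 3-ball
assert math.isclose(A[3], 4 * math.pi)        # surface area of the unit 2-sphere
assert math.isclose(A[4], 2 * math.pi ** 2)   # surface area of the unit 3-sphere

# Closed form: V_n = pi^(n/2) / Gamma(n/2 + 1).
assert math.isclose(V[5], math.pi ** 2.5 / math.gamma(3.5))
```

The closed-form check at the end ties the recursion back to the gamma-function formula in the text.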
8.
Versor
–
Versors are an algebraic parametrisation of rotations. In classical quaternion theory a versor is a quaternion of norm one. Each versor has the form q = exp(ar) = cos a + r sin a, where r² = −1 and a ∈ [0, π]. In case a = π/2, the versor is termed a right versor. The corresponding 3-dimensional rotation has the angle 2a about the axis r in axis–angle representation. The word is derived from Latin versare ("to turn") with the suffix -or forming a noun from the verb. It was introduced by William Rowan Hamilton in the context of his quaternion theory; for historical reasons, it sometimes is used synonymously with "unit quaternion" without a reference to rotations. In the quaternion algebra a versor q = exp(ar) will rotate any quaternion v through the product map v ↦ qvq⁻¹ such that the scalar part of v is preserved. If this scalar part is zero, i.e. v is a Euclidean vector in three dimensions, then the formula above defines the rotation through the angle 2a around the vector r. In other words, qvq⁻¹ rotates the vector part of v around the vector r; see quaternions and spatial rotation for details. A quaternionic versor expressed in the complex 2×2 matrix representation is an element of the unitary group SU(2). Spin(3) and SU(2) are the same group. Angles of rotation in this λ = 1/2 representation are equal to a; there is no factor of 2 in the angles, unlike in the λ = 1 adjoint representation mentioned above; see representation theory of SU(2) for details. For a fixed r, versors of the form exp(ar), where a ∈ (−π, π], form a subgroup. In 2003 David W. Lyons wrote that "the fibers of the Hopf map are circles in S³"; Lyons gives an introduction to quaternions to elucidate the Hopf fibration as a mapping on unit quaternions. Hamilton denoted the versor of a quaternion q by the symbol Uq, and he was then able to display the general quaternion in polar coordinate form q = Tq Uq, where Tq is the norm of q.
The norm of a versor is always equal to one; hence the versors occupy the unit 3-sphere in H. Examples of versors include the eight elements of the quaternion group. Of particular importance are the right versors, which have angle π/2; these versors have zero scalar part, and so are vectors of length one. The right versors form a sphere of square roots of −1 in the quaternion algebra. The generators i, j, and k are examples of right versors, as well as their additive inverses. Other versors include the twenty-four Hurwitz quaternions that have norm 1. Hamilton defined a quaternion as the quotient of two vectors; a versor can be defined as the quotient of two unit vectors. For any fixed plane Π, the quotient of two unit vectors lying in Π depends only on the angle between them, the same a as in the unit vector–angle representation of a versor explained above.
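The rotation map v ↦ qvq⁻¹ can be sketched with a few lines of quaternion arithmetic. The helper functions `qmul` and `conj` are our own (quaternions as (w, x, y, z) tuples); for a versor, the conjugate equals the inverse, which is what the sketch relies on.

```python
import math

# Sketch: a versor q = cos(a) + r*sin(a) rotates a vector through the
# angle 2a about the unit axis r via v -> q v q^-1.

def qmul(p, q):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def conj(q):
    """Quaternion conjugate; for a versor this is also the inverse."""
    w, x, y, z = q
    return (w, -x, -y, -z)

a = math.pi / 4                      # versor angle a; rotation angle is 2a = 90 deg
r = (0.0, 0.0, 1.0)                  # unit axis: the z-axis
q = (math.cos(a), *(math.sin(a) * c for c in r))

v = (0.0, 1.0, 0.0, 0.0)             # pure quaternion for the vector (1, 0, 0)
w = qmul(qmul(q, v), conj(q))

# Rotating (1, 0, 0) by 90 degrees about z gives (0, 1, 0),
# and the scalar part stays zero.
assert all(math.isclose(got, want, abs_tol=1e-12)
           for got, want in zip(w, (0.0, 0.0, 1.0, 0.0)))
```

Note the factor of 2: the versor angle is π/4, but the resulting rotation is through π/2, matching the λ = 1/2 versus adjoint-representation remark above.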
9.
Basis (linear algebra)
–
In more general terms, a basis is a linearly independent spanning set. Given a basis of a vector space V, every element of V can be expressed uniquely as a finite linear combination of basis vectors. A vector space can have several distinct sets of basis vectors; however, each such set has the same number of elements. A basis B of a vector space V over a field F is a linearly independent subset of V that spans V. In more detail, suppose that B = {v1, …, vn} is a subset of a vector space V over a field F. Then B is a basis if every vector x in V can be written uniquely as x = a1v1 + ⋯ + anvn with a1, …, an in F. The numbers ai are called the coordinates of the vector x with respect to the basis B. A vector space that has a finite basis is called finite-dimensional. To deal with infinite-dimensional spaces, we must generalize the definition to include infinite basis sets. The sums in the definition are all finite because, without additional structure, the axioms of a vector space do not permit us to meaningfully speak about an infinite sum of vectors. Settings that permit infinite linear combinations allow alternative definitions of the basis concept. It is often convenient to list the basis vectors in a specific order, for example, when considering the transformation matrix of a linear map with respect to a basis. We then speak of an ordered basis, which we define to be a sequence of linearly independent vectors that span V. To summarize: B is a set of linearly independent vectors, i.e. it is a linearly independent set, and every vector in V can be expressed as a linear combination of vectors in B in a unique way. If the basis is ordered, then the coefficients in this linear combination provide coordinates of the vector relative to the basis. Every vector space has a basis; the proof of this requires the axiom of choice. All bases of a vector space have the same cardinality, called the dimension of the vector space. This result is known as the dimension theorem, and requires the ultrafilter lemma, a strictly weaker form of the axiom of choice.
Many vector spaces can also be attributed a standard basis, which is both spanning and linearly independent. Standard bases, for example: In Rn, {e1, …, en}, where ei is the ith column of the identity matrix. In P2, where P2 is the set of all polynomials of degree at most 2, {1, x, x²} is the standard basis. In M22, where M22 is the set of all 2×2 matrices, the standard basis consists of the matrices Em,n, where Em,n is the 2×2 matrix with a 1 in the (m, n) position and zeros elsewhere. Given a vector space V over a field F, suppose that {α1, …, αn} and {β1, …, βn} are two bases for V.
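The fact that one vector has different coordinates relative to different bases can be sketched concretely in R². The helper `coords_2d` and the second basis {(1, 1), (1, −1)} are our own example choices; the coordinates are found by solving c1·b1 + c2·b2 = x with Cramer's rule.

```python
# Sketch: coordinates of a vector of R^2 relative to two different bases.

def coords_2d(b1, b2, x):
    """Solve c1*b1 + c2*b2 = x for (c1, c2) by Cramer's rule."""
    det = b1[0] * b2[1] - b2[0] * b1[1]   # nonzero iff {b1, b2} is a basis
    c1 = (x[0] * b2[1] - b2[0] * x[1]) / det
    c2 = (b1[0] * x[1] - x[0] * b1[1]) / det
    return c1, c2

x = (3.0, 5.0)

# Relative to the standard basis e1 = (1, 0), e2 = (0, 1), the coordinates
# are just the components of x.
assert coords_2d((1.0, 0.0), (0.0, 1.0), x) == (3.0, 5.0)

# Relative to the basis b1 = (1, 1), b2 = (1, -1), the same vector has
# different coordinates: x = 4*b1 + (-1)*b2.
assert coords_2d((1.0, 1.0), (1.0, -1.0), x) == (4.0, -1.0)
```

The uniqueness of each coordinate pair is exactly the uniqueness of the linear combination in the definition of a basis.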
10.
Euclidean space
–
In geometry, Euclidean space encompasses the two-dimensional Euclidean plane, the three-dimensional space of Euclidean geometry, and certain other spaces. It is named after the Ancient Greek mathematician Euclid of Alexandria. The term "Euclidean" distinguishes these spaces from other types of spaces considered in modern geometry. Euclidean spaces also generalize to higher dimensions. Classical Greek geometry defined the Euclidean plane and Euclidean three-dimensional space using certain postulates, while the other properties of these spaces were deduced as theorems; geometric constructions were also used to define rational numbers. It is now more common to define Euclidean space using Cartesian coordinates and the ideas of analytic geometry. It means that points of the space are specified with collections of real numbers. This approach brings the tools of algebra and calculus to bear on questions of geometry and has the advantage that it generalizes easily to Euclidean spaces of more than three dimensions. From the modern viewpoint, there is essentially only one Euclidean space of each dimension. With Cartesian coordinates it is modelled by the real coordinate space of the same dimension. In one dimension, this is the real line; in two dimensions, it is the Cartesian plane; and in higher dimensions it is a coordinate space with three or more real number coordinates. One way to think of the Euclidean plane is as a set of points satisfying certain relationships, expressible in terms of distance and angle. For example, there are two fundamental operations on the plane. One is translation, which means a shifting of the plane so that every point is shifted in the same direction and by the same distance. The other is rotation about a fixed point in the plane. In order to make all of this mathematically precise, the theory must clearly define the notions of distance, angle, translation, and rotation. Even when used in physical theories, Euclidean space is an abstraction detached from actual physical locations, specific reference frames, and measurement instruments.
The standard way to define such a space, as carried out in the remainder of this article, is to define the Euclidean plane as a two-dimensional real vector space equipped with an inner product. The reason for working with abstract vector spaces instead of Rn is that it is often preferable to work in a coordinate-free manner. Once the Euclidean plane has been described in this language, it is actually a simple matter to extend its concept to arbitrary dimensions. For the most part, the vocabulary, formulae, and calculations are not made any more difficult by the presence of more dimensions. Intuitively, the distinction says merely that there is no canonical choice of where the origin should go in the space.
11.
Dot product
–
In mathematics, the dot product or scalar product is an algebraic operation that takes two equal-length sequences of numbers and returns a single number. Sometimes it is called the inner product in the context of Euclidean space. Algebraically, the dot product is the sum of the products of the corresponding entries of the two sequences of numbers. Geometrically, it is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them. The dot product may be defined algebraically or geometrically; the geometric definition is based on the notions of angle and distance. The equivalence of these two definitions relies on having a Cartesian coordinate system for Euclidean space. In such a presentation, the notions of length and angle are not primitive, so the equivalence of the two definitions of the dot product is a part of the equivalence of the classical and the modern formulations of Euclidean geometry. For instance, in three-dimensional space, the dot product of the vectors [1, 3, −5] and [4, −2, −1] is [1, 3, −5] ⋅ [4, −2, −1] = (1)(4) + (3)(−2) + (−5)(−1) = 4 − 6 + 5 = 3. In Euclidean space, a Euclidean vector is a geometric object that possesses both a magnitude and a direction. A vector can be pictured as an arrow; its magnitude is its length, and its direction is the direction that the arrow points. The magnitude of a vector a is denoted by ‖a‖. The dot product of two Euclidean vectors a and b is defined by a ⋅ b = ‖a‖ ‖b‖ cos θ, where θ is the angle between a and b. In particular, if a and b are orthogonal, then the angle between them is 90° and a ⋅ b = 0. The scalar projection of a Euclidean vector a in the direction of a Euclidean vector b is given by a_b = ‖a‖ cos θ, where θ is the angle between a and b. In terms of the geometric definition of the dot product, this can be rewritten as a_b = a ⋅ b̂, where b̂ = b/‖b‖ is the unit vector in the direction of b. The dot product is thus characterized geometrically by a ⋅ b = a_b ‖b‖ = b_a ‖a‖. The dot product, defined in this manner, is homogeneous under scaling in each variable, and it also satisfies a distributive law, meaning that a ⋅ (b + c) = a ⋅ b + a ⋅ c.
These properties may be summarized by saying that the dot product is a bilinear form. Moreover, this bilinear form is positive definite, which means that a ⋅ a is never negative, and is zero if and only if a = 0. If e1, …, en are the standard basis vectors in Rn, then we may write a = [a1, …, an] = Σi ai ei and b = [b1, …, bn] = Σi bi ei. The vectors ei are an orthonormal basis, which means that they have unit length and are at right angles to each other. Since these vectors have unit length, ei ⋅ ei = 1, and since they form right angles with each other, ei ⋅ ej = 0 for i ≠ j. Thus in general we can say that ei ⋅ ej = δij, where δij is the Kronecker delta.
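The worked example in the text, and the agreement between the algebraic and geometric definitions, can be checked in a short sketch. The helper name `dot` is our own.

```python
import math

# The worked example from the text, plus a check that the algebraic and
# geometric definitions of the dot product agree.

def dot(a, b):
    """Algebraic definition: sum of products of corresponding entries."""
    return sum(x * y for x, y in zip(a, b))

a = (1.0, 3.0, -5.0)
b = (4.0, -2.0, -1.0)

assert dot(a, b) == 3.0   # (1)(4) + (3)(-2) + (-5)(-1) = 3

# Geometric definition: a . b = |a| |b| cos(theta).
na = math.sqrt(dot(a, a))
nb = math.sqrt(dot(b, b))
theta = math.acos(dot(a, b) / (na * nb))
assert math.isclose(na * nb * math.cos(theta), dot(a, b))

# Orthonormal standard basis vectors: e_i . e_j = delta_ij.
e = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
assert all(dot(e[i], e[j]) == (1.0 if i == j else 0.0)
           for i in range(3) for j in range(3))
```

The last check is the Kronecker-delta relation ei ⋅ ej = δij from the paragraph above.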
12.
Trigonometric functions
–
In planar geometry, an angle is the figure formed by two rays, called the sides of the angle, sharing a common endpoint, called the vertex of the angle. Angles formed by two rays lie in a plane, but this plane does not have to be a Euclidean plane. Angles are also formed by the intersection of two planes in Euclidean and other spaces. Angles formed by the intersection of two curves in a plane are defined as the angle determined by the tangent rays at the point of intersection. Similar statements hold in space; for example, the angle formed by two great circles on a sphere is the dihedral angle between the planes determined by the great circles. "Angle" is also used to designate the measure of an angle or of a rotation. This measure is the ratio of the length of a circular arc to its radius. In the case of a geometric angle, the arc is centered at the vertex. In the case of a rotation, the arc is centered at the center of the rotation and delimited by any other point and its image by the rotation. The word angle comes from the Latin word angulus, meaning "corner"; cognate words are the Greek ἀγκύλος, meaning "crooked, curved". Both are connected with the Proto-Indo-European root *ank-, meaning "to bend" or "bow". Euclid defines a plane angle as the inclination to each other, in a plane, of two lines which meet each other and do not lie straight with respect to each other. According to Proclus, an angle must be either a quality or a quantity, or a relationship. In mathematical expressions, it is common to use Greek letters to serve as variables standing for the size of some angle. Lower case Roman letters are also used, as are upper case Roman letters in the context of polygons. See the figures in this article for examples. In geometric figures, angles may also be identified by the labels attached to the three points that define them; for example, the angle at vertex A enclosed by the rays AB and AC is denoted ∠BAC. Sometimes, where there is no risk of confusion, the angle may be referred to simply by its vertex.
However, in many geometrical situations it is obvious from context that the positive angle less than or equal to 180 degrees is meant. Otherwise, a convention may be adopted so that ∠BAC always refers to the angle measured from B to C. Angles smaller than a right angle are called acute angles. An angle equal to 1/4 turn is called a right angle; two lines that form a right angle are said to be normal, orthogonal, or perpendicular. Angles larger than a right angle and smaller than a straight angle are called obtuse angles. An angle equal to 1/2 turn is called a straight angle. Angles larger than a straight angle but less than 1 turn are called reflex angles.
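The arc-over-radius measure and the size classification above can be sketched together. The helper names `angle_from_arc` and `classify` are our own; the measure produced is in radians, so a right angle is π/2 and a straight angle is π.

```python
import math

# Sketch: the radian measure of an angle is the ratio of arc length to
# radius, and angles are classified by their size.

def angle_from_arc(arc_length, radius):
    """Measure of the angle, in radians, subtending the given arc."""
    return arc_length / radius

def classify(angle):
    """Classify an angle given in radians, with 0 < angle < 2*pi."""
    if angle < math.pi / 2:
        return "acute"
    if angle == math.pi / 2:
        return "right"
    if angle < math.pi:
        return "obtuse"
    if angle == math.pi:
        return "straight"
    return "reflex"

# A quarter of the circumference of a circle subtends a right angle,
# regardless of the radius.
r = 2.0
quarter_arc = 2 * math.pi * r / 4
assert math.isclose(angle_from_arc(quarter_arc, r), math.pi / 2)

assert classify(math.pi / 3) == "acute"
assert classify(3 * math.pi / 4) == "obtuse"
assert classify(3 * math.pi / 2) == "reflex"
```

Because the measure is a ratio of two lengths, it is dimensionless, which is why radians need no unit symbol.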
13.
Cross product
–
In mathematics and vector algebra, the cross product or vector product is a binary operation on two vectors in three-dimensional space, denoted by the symbol ×. Given two linearly independent vectors a and b, the cross product, a × b, is a vector that is perpendicular to both a and b and therefore normal to the plane containing them. It has many applications in mathematics, physics, and engineering, and it should not be confused with the dot product. If two vectors have the same direction or if either one has zero length, then their cross product is zero. The cross product is anticommutative and is distributive over addition. The space R³ together with the cross product is an algebra over the real numbers, which is neither commutative nor associative, but is a Lie algebra with the cross product being the Lie bracket. Like the dot product, it depends on the metric of Euclidean space; but if the product is limited to non-trivial binary products with vector results, it exists only in three and seven dimensions. The cross product of two vectors a and b is defined only in three-dimensional space and is denoted by a × b. In physics, sometimes the notation a ∧ b is used instead. The cross product is defined by the formula a × b = ‖a‖ ‖b‖ sin(θ) n, where θ is the angle between a and b, and n is a unit vector perpendicular to the plane containing them, with direction given by the right-hand rule. If the vectors a and b are parallel, by the above formula, the cross product of a and b is the zero vector 0. Under the right-hand rule, pointing the forefinger of the right hand toward a and the middle finger toward b, the vector n comes out of the thumb. Using this rule implies that the cross product is anti-commutative, i.e. b × a = −(a × b): by pointing the forefinger toward b first, and then pointing the middle finger toward a, the thumb is forced in the opposite direction. Using the cross product requires the handedness of the coordinate system to be taken into account. If a left-handed coordinate system is used, the direction of the vector n is given by the left-hand rule. This, however, creates a problem because transforming from one arbitrary reference system to another should not change the direction of n. The problem is clarified by realizing that the cross product of two vectors is not a (true) vector, but rather a pseudovector.
See cross product and handedness for more detail. In 1881, Josiah Willard Gibbs and, independently, Oliver Heaviside introduced both the dot product and the cross product, using a period (a ⋅ b) and an × (a × b), respectively, to denote them. These names are widely used in the literature. Both the cross notation and the name cross product were possibly inspired by the fact that each scalar component of a × b is computed by multiplying non-corresponding components of a and b. Conversely, a dot product a ⋅ b involves multiplications between corresponding components of a and b. As explained below, the cross product can be expressed in the form of a determinant of a special 3×3 matrix.
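The defining properties above — perpendicularity to both factors, anticommutativity, and the determinant form — can be checked numerically; this is a sketch using NumPy with arbitrary example vectors.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

c = np.cross(a, b)  # a × b

# a × b is perpendicular (normal) to both a and b
assert np.isclose(np.dot(c, a), 0.0)
assert np.isclose(np.dot(c, b), 0.0)

# anticommutativity: b × a = −(a × b)
assert np.allclose(np.cross(b, a), -c)

# determinant form: component i of a × b equals the determinant of the
# 3×3 matrix whose rows are the standard basis vector e_i, a, and b
e = np.eye(3)
for i in range(3):
    assert np.isclose(np.linalg.det(np.vstack([e[i], a, b])), c[i])
```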
14.
Right-hand rule
–
In mathematics and physics, the right-hand rule is a common mnemonic for understanding orientation conventions for vectors in three dimensions. Most of the various left- and right-hand rules arise from the fact that the three axes of 3-dimensional space have two possible orientations. This can be seen by holding your hands outward and together, palms up, with the fingers curled. If the curl of your fingers represents a movement from the first (X) axis to the second (Y) axis, then the third (Z) axis can point either along your left thumb or your right thumb. Left- and right-hand rules arise when dealing with coordinate axes, rotation, spirals, electromagnetic fields, mirror images and enantiomers in mathematics and chemistry. For right-handed coordinates, your right thumb points along the Z axis in the positive Z-direction, and when viewed from the top (looking down the Z axis) the system is counter-clockwise; for left-handed coordinates, viewed the same way, the system is clockwise. Interchanging the labels of any two axes reverses the handedness. Reversing the direction of one axis also reverses the handedness, while reversing two axes amounts to a 180° rotation around the remaining axis. In mathematics a rotating body is represented by a vector along the axis of rotation, which allows some easy calculations using the vector cross product. Note that no part of the body is moving in the direction of the axis arrow. By coincidence, if your thumb points north the Earth rotates according to the right-hand rule, which causes the sun and stars to appear to revolve according to the left-hand rule. A helix (to use a more accurate term than spiral) is basically a circular curve that advances along the z-axis while rotating in the x-y plane. Helices are either right- or left-handed, with the curled fingers giving the direction of rotation and the thumb the direction of advance. The two types are mirror images of each other: they are physically distinct and cannot be transformed into each other by any physical operation such as turning them over.
The threads on a right-handed screw form a right-handed helix; they are basically a long inclined plane wrapped around a cylinder, such that turning the screw advances it along the z-axis. From the point of view of the threads, turning the screw forces the screw up or down the inclined plane. If a screw is right-handed, the rule is this: point your right thumb in the direction you want the screw to go and turn the screw in the direction of your curled right fingers. Viewed from the Earth, the path of an object moving in a straight line appears to bend to the right in the northern hemisphere. This causes low-pressure areas in the northern hemisphere to rotate according to the right-hand rule; handedness is not obvious here, but it is clear in the underlying mathematics.
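The axis-swapping facts stated above have a compact algebraic form: a basis is right-handed exactly when the determinant of the matrix of its vectors is positive. A minimal numerical check, using rows as basis vectors:

```python
import numpy as np

# A basis (rows of the matrix) is right-handed iff its determinant is positive.
right_handed = np.eye(3)
assert np.linalg.det(right_handed) > 0

# Interchanging the labels of any two axes reverses the handedness
swapped = right_handed[[1, 0, 2]]              # swap X and Y
assert np.linalg.det(swapped) < 0

# Reversing the direction of one axis also reverses the handedness
flipped = right_handed * np.array([[1], [1], [-1]])   # negate Z
assert np.linalg.det(flipped) < 0

# Reversing two axes preserves handedness (it is a 180° rotation)
double_flip = right_handed * np.array([[-1], [-1], [1]])
assert np.linalg.det(double_flip) > 0
```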
15.
Standard basis
–
In mathematics, the standard basis for a Euclidean space is the set of unit vectors pointing in the direction of the axes of a Cartesian coordinate system. For example, the standard basis for the Euclidean plane is formed by the vectors e_x = (1, 0) and e_y = (0, 1). Here the vector e_x points in the x direction and the vector e_y points in the y direction. There are several common notations for these vectors, including (e_x, e_y), (e_1, e_2), and (i, j). These vectors are sometimes written with a hat to emphasize their status as unit vectors. Each of these vectors is sometimes referred to as the versor of the corresponding Cartesian axis, and these vectors form a basis in the sense that any other vector can be expressed uniquely as a linear combination of them. For example, every vector v in three-dimensional space can be written uniquely as v_x e_x + v_y e_y + v_z e_z, the scalars v_x, v_y, v_z being the scalar components of the vector v. In n-dimensional Euclidean space, the standard basis consists of n distinct vectors, each with one coordinate equal to 1 and the rest equal to 0. Standard bases can be defined for other vector spaces, such as spaces of polynomials or of matrices. In both cases, the standard basis consists of the elements of the vector space such that all coefficients but one are 0 and the non-zero coefficient is 1. For polynomials, the standard basis consists of the monomials and is commonly called the monomial basis. For matrices M_{m×n}, the standard basis consists of the m×n matrices with exactly one non-zero entry, which is 1. For example, the standard basis for 2×2 matrices is formed by the 4 matrices e_11 = (1 0; 0 0), e_12 = (0 1; 0 0), e_21 = (0 0; 1 0), e_22 = (0 0; 0 1). By definition, the standard basis is a sequence of orthogonal unit vectors; in other words, it is an ordered orthonormal basis. However, an ordered orthonormal basis is not necessarily a standard basis: for instance, the two vectors representing a 30° rotation of the 2D standard basis described above are orthonormal but are not the standard basis. There is a standard basis also for the ring of polynomials in n indeterminates over a field, namely the monomials.
This family is the basis of the R-module R^(I) of all families f = (f_i) from an index set I into a ring R that are zero except for a finite number of indices, if we interpret 1 as 1_R, the unit of R. The existence of standard bases has become a topic of interest in algebraic geometry; it is now part of a theory called standard monomial theory.
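The two defining examples above — the standard basis of Rⁿ and the standard basis of 2×2 matrices — can be illustrated with a short NumPy sketch showing that a vector's (or matrix's) own entries are exactly its coefficients in the standard basis:

```python
import numpy as np

# Standard basis of R^3: the rows of the identity matrix
ex, ey, ez = np.eye(3)

v = np.array([2.0, -1.0, 5.0])
# Unique expansion v = v_x e_x + v_y e_y + v_z e_z
assert np.allclose(v, v[0] * ex + v[1] * ey + v[2] * ez)

# Standard basis of 2×2 matrices: exactly one entry equal to 1
positions = [(0, 0), (0, 1), (1, 0), (1, 1)]
basis_22 = []
for i, j in positions:
    e = np.zeros((2, 2))
    e[i, j] = 1.0
    basis_22.append(e)

m = np.array([[3.0, 1.0], [4.0, 1.0]])
# The matrix entries are the coefficients of the expansion
recombined = sum(m[i, j] * basis_22[k] for k, (i, j) in enumerate(positions))
assert np.allclose(m, recombined)
```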
16.
Versor (physics)
–
In geometry and physics, the versor of an axis or of a vector is a unit vector indicating its direction. The versor of a Cartesian axis is also known as a standard basis vector, and the versor of a general vector is also known as a normalized vector. The versors of the axes of a Cartesian coordinate system are the unit vectors codirectional with the axes of that system, and every Euclidean vector a in an n-dimensional Euclidean space can be represented as a linear combination of the n versors of the corresponding Cartesian coordinate system. For instance, in a three-dimensional space there are three versors: i = (1, 0, 0), j = (0, 1, 0), and k = (0, 0, 1). They indicate the direction of the Cartesian axes x, y, and z, respectively. In linear algebra, the set formed by these n versors is typically referred to as the standard basis of the corresponding Euclidean space, and each of them is commonly called a standard basis vector. A hat above the symbol of a versor is sometimes used to emphasize its status as a unit vector. In most contexts it can be assumed that i, j, and k are the versors of a 3-D Cartesian coordinate system. Notations such as (ê_1, ê_2, ê_3), with or without hat, are also used, and this is recommended, for instance, when index symbols such as i, j, k are used to identify an element of a set of variables. The versor û of a vector u is the unit vector codirectional with u: û = u / ∥u∥, where ∥u∥ is the norm of u.
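The normalization formula û = u / ‖u‖ translates directly into code; this is a minimal sketch, with the function name `versor` chosen to match the article's terminology.

```python
import numpy as np

def versor(u):
    """Return the unit vector û = u / ‖u‖ codirectional with u."""
    norm = np.linalg.norm(u)
    if norm == 0:
        raise ValueError("the zero vector has no versor")
    return u / norm

u = np.array([3.0, 4.0, 0.0])
u_hat = versor(u)

assert np.isclose(np.linalg.norm(u_hat), 1.0)      # unit length
assert np.allclose(u, np.linalg.norm(u) * u_hat)   # same direction
```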
17.
Cartesian coordinate system
–
Each reference line is called a coordinate axis or just axis of the system, and the point where they meet is its origin, usually at the ordered pair (0, 0). The coordinates can also be defined as the positions of the perpendicular projections of the point onto the two axes, expressed as signed distances from the origin. One can use the same principle to specify the position of any point in three-dimensional space by three Cartesian coordinates, its signed distances to three mutually perpendicular planes. In general, n Cartesian coordinates specify the point in an n-dimensional Euclidean space for any dimension n; these coordinates are equal, up to sign, to distances from the point to n mutually perpendicular hyperplanes. The invention of Cartesian coordinates in the 17th century by René Descartes revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra. Using the Cartesian coordinate system, geometric shapes can be described by Cartesian equations: algebraic equations involving the coordinates of the points lying on the shape. For example, a circle of radius 2, centered at the origin of the plane, may be described as the set of all points whose coordinates x and y satisfy the equation x² + y² = 4. A familiar example is the concept of the graph of a function. Cartesian coordinates are also essential tools for most applied disciplines that deal with geometry, including astronomy, physics, and engineering. They are the most common coordinate system used in computer graphics and computer-aided geometric design. Nicole Oresme, a French cleric and friend of the Dauphin in the 14th century, used constructions similar to Cartesian coordinates well before the time of Descartes. The adjective Cartesian refers to the French mathematician and philosopher René Descartes, who published this idea in 1637; it was independently discovered by Pierre de Fermat, who also worked in three dimensions, although Fermat did not publish the discovery. Both authors used a single axis in their treatments, with lengths measured in reference to this axis.
The concept of using a pair of axes was introduced later, after Descartes' La Géométrie was translated into Latin in 1649 by Frans van Schooten and his students; these commentators introduced several concepts while trying to clarify the ideas contained in Descartes' work. Many other coordinate systems have been developed since Descartes, such as the polar coordinates for the plane. The development of the Cartesian coordinate system would play a fundamental role in the development of the calculus by Isaac Newton. The two-coordinate description of the plane was later generalized into the concept of vector spaces. Choosing a Cartesian coordinate system for a one-dimensional space, that is, for a straight line, involves choosing a point O of the line (the origin), a unit of length, and an orientation for the line. The orientation chooses which of the two half-lines determined by O is the positive one and which is negative; we then say that the line is oriented from the negative half towards the positive half.
18.
Orthogonality
–
The concept of orthogonality has been broadly generalized in mathematics, as well as in areas such as chemistry and engineering. The word comes from the Greek ὀρθός (orthos), meaning upright, and γωνία (gonia), meaning angle. The ancient Greek ὀρθογώνιον (orthogōnion) and classical Latin orthogonium originally denoted a rectangle; later, they came to mean a right triangle. In the 12th century, the post-classical Latin word orthogonalis came to mean a right angle or something related to a right angle. In geometry, two Euclidean vectors are orthogonal if they are perpendicular, i.e. they form a right angle. Two vectors x and y in an inner product space V are orthogonal if their inner product ⟨x, y⟩ is zero; this relationship is denoted x ⊥ y. Two vector subspaces A and B of an inner product space V are called orthogonal subspaces if each vector in A is orthogonal to each vector in B. The largest subspace of V that is orthogonal to a given subspace is its orthogonal complement. Given a module M and its dual M∗, an element m′ of M∗ is said to be orthogonal to an element m of M if m′(m) = 0; two sets S′ ⊆ M∗ and S ⊆ M are orthogonal if each element of S′ is orthogonal to each element of S. A term rewriting system is said to be orthogonal if it is left-linear and non-ambiguous; orthogonal term rewriting systems are confluent. A set of vectors in an inner product space is called pairwise orthogonal if each pairing of them is orthogonal; such a set is called an orthogonal set. Nonzero pairwise orthogonal vectors are always linearly independent. In certain cases, the word normal is used to mean orthogonal. For example, the y-axis is normal to the curve y = x² at the origin. However, normal may also refer to the magnitude of a vector; in particular, a set is called orthonormal if it is an orthogonal set of unit vectors. As a result, use of the term normal to mean orthogonal is often avoided. The word normal also has a different meaning in probability and statistics. A vector space with a bilinear form generalizes the case of an inner product.
When the bilinear form applied to two vectors results in zero, then they are orthogonal. The case of a pseudo-Euclidean plane uses the term hyperbolic orthogonality; in the diagram, axes x′ and t′ are hyperbolic-orthogonal for any given ϕ. In 2-D or higher-dimensional Euclidean space, two vectors are orthogonal if and only if their dot product is zero, i.e. they make an angle of 90°; hence orthogonality of vectors is an extension of the concept of perpendicular vectors to higher-dimensional spaces.
19.
Linear algebra
–
Linear algebra is the branch of mathematics concerning vector spaces and linear mappings between such spaces. It includes the study of lines, planes, and subspaces. The set of points with coordinates that satisfy a linear equation forms a hyperplane in an n-dimensional space, and the conditions under which a set of n hyperplanes intersect in a single point are an important focus of study in linear algebra. Such an investigation is initially motivated by a system of linear equations containing several unknowns; such equations are naturally represented using the formalism of matrices and vectors. Linear algebra is central to both pure and applied mathematics. For instance, abstract algebra arises by relaxing the axioms of a vector space, leading to a number of generalizations, and functional analysis studies the infinite-dimensional version of the theory of vector spaces. Combined with calculus, linear algebra facilitates the solution of linear systems of differential equations. Because linear algebra is such a well-developed theory, nonlinear mathematical models are sometimes approximated by linear models. The study of linear algebra first emerged from the study of determinants: determinants were used by Leibniz in 1693, and subsequently Gabriel Cramer devised Cramer's Rule for solving linear systems in 1750. Later, Gauss further developed the theory of solving linear systems by using Gaussian elimination. The study of matrix algebra first emerged in England in the mid-1800s. In 1844 Hermann Grassmann published his Theory of Extension, which included foundational new topics of what is today called linear algebra. In 1848, James Joseph Sylvester introduced the term matrix, which is Latin for womb. While studying compositions of linear transformations, Arthur Cayley was led to define matrix multiplication and inverses; crucially, Cayley used a single letter to denote a matrix.
In 1882, Hüseyin Tevfik Pasha wrote the book titled Linear Algebra. The first modern and more precise definition of a vector space was introduced by Peano in 1888, and by 1900 a theory of linear transformations of finite-dimensional vector spaces had emerged. Linear algebra took its modern form in the first half of the twentieth century. The use of matrices in quantum mechanics, special relativity, and statistics helped spread the subject beyond pure mathematics; the origin of many of these ideas is discussed in the articles on determinants and Gaussian elimination. Linear algebra first appeared in American graduate textbooks in the 1940s. Following work by the School Mathematics Study Group, U.S. high schools in the 1960s asked 12th grade students to do matrix algebra, formerly reserved for college. In France during the 1960s, educators attempted to teach linear algebra through finite-dimensional vector spaces in the first year of secondary school; this was met with a backlash in the 1980s that removed linear algebra from the curriculum. More recently, there have been efforts to reshape the curriculum to better suit 21st-century applications, such as data mining and uncertainty analysis.
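The two classical solution methods mentioned above — Cramer's rule and Gaussian elimination — can be compared on a small system; this sketch uses NumPy's solver (an LU-based elimination) against a direct application of Cramer's rule. The example system is invented for illustration.

```python
import numpy as np

# The system  2x + y = 5,  x + 3y = 10  in matrix form A x = b
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

# Gaussian-elimination-style solve
x = np.linalg.solve(A, b)

# Cramer's rule: x_i = det(A_i) / det(A), where A_i is A with
# column i replaced by b
det_A = np.linalg.det(A)
cramer = np.array([
    np.linalg.det(np.column_stack([b, A[:, 1]])),
    np.linalg.det(np.column_stack([A[:, 0], b])),
]) / det_A

assert np.allclose(x, cramer)
```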
20.
Direction cosines
–
In analytic geometry, the direction cosines of a vector are the cosines of the angles between the vector and the three coordinate axes. Equivalently, they are the contributions of each component of the basis to a unit vector in that direction. Direction cosines are an analogous extension of the usual notion of slope to higher dimensions. By squaring each defining equation and adding the results, it follows that cos²α + cos²β + cos²γ = 1, where α, β and γ are the angles the vector makes with the coordinate axes; their cosines are the direction cosines, and they are also the Cartesian coordinates of the unit vector v/|v|. More generally, direction cosine can refer to the cosine of the angle between any two vectors. Direction cosines are useful for forming direction cosine matrices that express one set of orthonormal basis vectors in terms of another set.
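The identity cos²α + cos²β + cos²γ = 1 can be verified directly, since the direction cosines are just the components of the normalized vector; the example vector (2, 3, 6), with norm 7, is chosen for illustration.

```python
import numpy as np

v = np.array([2.0, 3.0, 6.0])           # |v| = sqrt(4 + 9 + 36) = 7

# Direction cosines = components of the unit vector v/|v|
cosines = v / np.linalg.norm(v)

# cos²α + cos²β + cos²γ = 1
assert np.isclose(np.sum(cosines**2), 1.0)

# Each is the cosine of the angle between v and the corresponding axis
alpha = np.arccos(cosines[0])           # angle with the x axis
assert np.isclose(np.cos(alpha), 2.0 / 7.0)
```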
21.
Orientation (vector space)
–
In linear algebra, the notion of orientation makes sense in arbitrary finite dimension. In this setting, the orientation of an ordered basis is a kind of asymmetry that makes a reflection impossible to replicate by means of a simple rotation. Thus, in three-dimensional Euclidean space, the two possible basis orientations are called right-handed and left-handed. An orientation on a real vector space is the arbitrary choice of which ordered bases are positively oriented and which are negatively oriented; in three-dimensional Euclidean space, right-handed bases are typically declared to be positively oriented. A vector space with an orientation selected is called an oriented vector space, while one not having an orientation selected is called unoriented. Let V be a finite-dimensional real vector space and let b1 and b2 be two ordered bases for V. It is a standard result in linear algebra that there exists a unique linear transformation A : V → V that takes b1 to b2. The bases b1 and b2 are said to have the same orientation if A has positive determinant; otherwise they have opposite orientations. The property of having the same orientation defines an equivalence relation on the set of all ordered bases for V. If V is non-zero, there are precisely two equivalence classes determined by this relation. An orientation on V is an assignment of +1 to one equivalence class and −1 to the other. Every ordered basis lives in one equivalence class or another; thus any choice of an ordered basis for V determines an orientation. For example, the standard basis on Rn provides a standard orientation on Rn, and any choice of an isomorphism between V and Rn will then provide an orientation on V. The ordering of elements in a basis is crucial: two bases with a different ordering will differ by some permutation, and they will have the same or opposite orientations according to whether the signature of this permutation is +1 or −1. This is because the determinant of a permutation matrix is equal to the signature of the associated permutation. Similarly, let A be a linear mapping of the vector space Rn to Rn.
This mapping is orientation-preserving if its determinant is positive. A zero-dimensional vector space has only a single point, the zero vector; consequently, the only basis of a zero-dimensional vector space is the empty set ∅. Therefore, there is a single equivalence class of ordered bases, namely the class whose sole member is the empty set.
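The determinant test for same-orientation bases can be sketched numerically. Since the unique map A taking basis b1 to b2 has det(A) = det(b2)/det(b1) (bases written as matrix rows), the sign of det(b1)·det(b2) decides whether the orientations agree; the helper name `same_orientation` is illustrative.

```python
import numpy as np

def same_orientation(b1, b2):
    """Bases given as rows of invertible matrices; the unique linear
    map taking b1 to b2 has determinant det(b2)/det(b1), so the two
    bases share an orientation iff det(b1)*det(b2) > 0."""
    return np.linalg.det(b1) * np.linalg.det(b2) > 0

std = np.eye(3)
swapped = std[[1, 0, 2]]               # odd permutation (transposition)
cycled = std[[1, 2, 0]]                # even permutation (3-cycle)

# A transposition has signature −1: opposite orientation class
assert not same_orientation(std, swapped)
# A 3-cycle has signature +1: same orientation class
assert same_orientation(std, cycled)
```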
22.
Jacobian matrix and determinant
–
In vector calculus, the Jacobian matrix is the matrix of all first-order partial derivatives of a vector-valued function. When the matrix is a square matrix, both the matrix and its determinant are referred to as the Jacobian in the literature. Suppose f : ℝn → ℝm is a function that takes as input the vector x ∈ ℝn and produces as output the vector f(x) ∈ ℝm. Then the Jacobian matrix J of f is an m×n matrix, defined component-wise by J_ij = ∂f_i/∂x_j. This matrix, whose entries are functions of x, is also denoted by Df, Jf, and ∂(f_1, …, f_m)/∂(x_1, …, x_n). The Jacobian at a point gives the best linear approximation of f near that point; this linear map is thus the generalization of the usual notion of derivative. If m = n, the Jacobian matrix is a square matrix, and its determinant, a function of x, is the Jacobian determinant of f. It carries important information about the local behavior of f. In particular, the function f has, locally in the neighborhood of a point x, an inverse function that is differentiable if the Jacobian determinant is non-zero at x. The Jacobian determinant also appears when changing the variables in multiple integrals. If m = 1, f is a scalar field and the Jacobian matrix is reduced to a row vector of partial derivatives of f, i.e. the gradient of f. These concepts are named after the mathematician Carl Gustav Jacob Jacobi. The Jacobian generalizes the gradient of a scalar-valued function of multiple variables, which itself generalizes the derivative of a scalar-valued function of a single variable; in other words, the Jacobian matrix of a scalar-valued multivariate function is its gradient. The Jacobian can also be thought of as describing the amount of stretching, rotating or transforming that a transformation imposes locally. For example, if (x′, y′) = f(x, y) is used to transform an image, the Jacobian Jf(x, y) describes how the image in the neighborhood of (x, y) is transformed. If p is a point in ℝn and f is differentiable at p, then f(x) = f(p) + Jf(p)(x − p) + o(‖x − p‖); compare this to a Taylor series for a scalar function of a scalar argument, truncated to first order: f(x) = f(p) + f′(p)(x − p) + o(x − p). The Jacobian of the gradient of a scalar function of several variables has a special name: the Hessian matrix.
If m = n, then f is a function from ℝn to itself and we can form the determinant of its Jacobian matrix, known as the Jacobian determinant; the Jacobian determinant is occasionally referred to simply as the Jacobian. The Jacobian determinant at a given point gives important information about the behavior of f near that point. For instance, the continuously differentiable function f is invertible near a point p ∈ ℝn if the Jacobian determinant at p is non-zero; this is the inverse function theorem. Furthermore, if the Jacobian determinant at p is positive, then f preserves orientation near p; if it is negative, f reverses orientation.
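A Jacobian can be approximated column by column with central differences; this sketch (not production code) applies it to the polar-to-Cartesian map, whose Jacobian determinant is the familiar factor r from changing variables in double integrals.

```python
import numpy as np

def jacobian(f, x, h=1e-6):
    """Approximate the m×n Jacobian J_ij = ∂f_i/∂x_j by central
    differences, one coordinate direction per column."""
    x = np.asarray(x, dtype=float)
    cols = []
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = h
        cols.append((f(x + dx) - f(x - dx)) / (2 * h))
    return np.column_stack(cols)

# Polar-to-Cartesian map f(r, θ) = (r cos θ, r sin θ)
f = lambda p: np.array([p[0] * np.cos(p[1]), p[0] * np.sin(p[1])])

r, theta = 2.0, 0.7
J = jacobian(f, [r, theta])

# det J = r: the change-of-variables factor for polar coordinates;
# it is positive, so the map preserves orientation here
assert np.isclose(np.linalg.det(J), r, atol=1e-5)
```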
23.
Orthogonal coordinates
–
In mathematics, orthogonal coordinates are defined as a set of d coordinates q = (q1, …, qd) in which the coordinate surfaces all meet at right angles. A coordinate surface for a particular coordinate qk is the curve, surface, or hypersurface on which qk is constant. Orthogonal coordinates are a special but extremely common case of curvilinear coordinates. The chief advantage of non-Cartesian coordinates is that they can be chosen to match the symmetry of the problem; the reason to prefer orthogonal coordinates over general curvilinear coordinates is simplicity, since many complications arise when coordinates are not orthogonal. For example, in orthogonal coordinates many problems may be solved by separation of variables, a mathematical technique that converts a complex d-dimensional problem into d one-dimensional problems that can be solved in terms of known functions. Many equations can be reduced to Laplace's equation or the Helmholtz equation; Laplace's equation is separable in 13 orthogonal coordinate systems, and the Helmholtz equation is separable in 11 orthogonal coordinate systems. Orthogonal coordinates never have off-diagonal terms in their metric tensor, and the associated scaling functions hi are used to calculate differential operators in the new coordinates, e.g. the gradient, the Laplacian, the divergence and the curl. A simple method for generating orthogonal coordinate systems in two dimensions is by a conformal mapping of a standard two-dimensional grid of Cartesian coordinates, where a complex number z = x + iy is formed from the real coordinates x and y. However, there are other orthogonal coordinate systems in three dimensions that cannot be obtained by projecting or rotating a two-dimensional system, such as the ellipsoidal coordinates. More general orthogonal coordinates may be obtained by starting with some necessary coordinate surfaces. In Cartesian coordinates, the basis vectors are fixed.
What distinguishes orthogonal coordinates is that, though the basis vectors vary from point to point, they remain mutually orthogonal; note, however, that the basis vectors are not necessarily of equal length. The useful functions known as the scale factors of the coordinates are simply the lengths h_i of the basis vectors e_i. The scale factors are sometimes called Lamé coefficients, but this terminology is best avoided since some better-known coefficients in linear elasticity carry the same name. Components in the normalized basis are most common in applications, for clarity of the physical quantities. The basis vectors shown above are covariant basis vectors. While a vector is an objective quantity, meaning its identity is independent of any coordinate system, the components of a vector depend on what basis the vector is represented in. Note that the summation symbols Σ and the summation range, indicating summation over all basis vectors, are often omitted. Vector addition and negation are done component-wise just as in Cartesian coordinates, with no complication; extra considerations may be necessary for other vector operations. Note, however, that all of these operations assume that the two vectors in a vector field are bound to the same point.
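The scale factors and orthogonality can be checked numerically for spherical coordinates, whose covariant basis vectors are the partial derivatives of the position map; the well-known scale factors h_r = 1, h_θ = r, h_φ = r sin θ should emerge. This is a sketch using crude central differences.

```python
import numpy as np

def position(r, theta, phi):
    """Spherical-to-Cartesian position map (physics convention:
    θ = inclination from the z axis, φ = azimuth)."""
    return np.array([r * np.sin(theta) * np.cos(phi),
                     r * np.sin(theta) * np.sin(phi),
                     r * np.cos(theta)])

r, theta, phi = 2.0, 0.8, 1.1
h = 1e-6

# Covariant basis vectors: partial derivatives of the position map
e_r     = (position(r + h, theta, phi) - position(r - h, theta, phi)) / (2 * h)
e_theta = (position(r, theta + h, phi) - position(r, theta - h, phi)) / (2 * h)
e_phi   = (position(r, theta, phi + h) - position(r, theta, phi - h)) / (2 * h)

# The basis is orthogonal: all pairwise dot products vanish
for a, b in [(e_r, e_theta), (e_r, e_phi), (e_theta, e_phi)]:
    assert abs(np.dot(a, b)) < 1e-6

# Scale factors are the basis-vector lengths: h_r = 1, h_θ = r, h_φ = r sin θ
assert np.isclose(np.linalg.norm(e_r), 1.0, atol=1e-5)
assert np.isclose(np.linalg.norm(e_theta), r, atol=1e-5)
assert np.isclose(np.linalg.norm(e_phi), r * np.sin(theta), atol=1e-5)
```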
24.
Physics
–
Physics is the natural science that involves the study of matter and its motion and behavior through space and time, along with related concepts such as energy and force. One of the most fundamental scientific disciplines, the main goal of physics is to understand how the universe behaves. Physics is one of the oldest academic disciplines, perhaps the oldest through its inclusion of astronomy. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the fundamental mechanisms of other sciences while opening new avenues of research in areas such as mathematics. Physics also makes significant contributions through advances in new technologies that arise from theoretical breakthroughs; the United Nations named 2005 the World Year of Physics. Astronomy is the oldest of the natural sciences. The stars and planets were often a target of worship, believed to represent the gods, and while the explanations for these phenomena were often unscientific and lacking in evidence, these early observations laid a foundation for later astronomy. According to Asger Aaboe, the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy. In the medieval Islamic world, the most notable innovations were in the field of optics and vision, which came from the works of many scientists like Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi and Avicenna. The most notable work was The Book of Optics, written by Ibn al-Haytham, in which he was not only the first to disprove the ancient Greek idea about vision, but also came up with a new theory. In the book, he was also the first to study the phenomenon of the pinhole camera. Many later European scholars and fellow polymaths, from Robert Grosseteste and Leonardo da Vinci to René Descartes, Johannes Kepler and Isaac Newton, were in his debt.
Indeed, the influence of Ibn al-Haytham's Optics ranks alongside that of Newton's work of the same title; the translation of The Book of Optics had a huge impact on Europe. From it, later European scholars were able to build devices like the ones Ibn al-Haytham had built, and from this, such important inventions as eyeglasses, magnifying glasses and telescopes were developed. Physics became a separate science when early modern Europeans used experimental and quantitative methods to discover what are now considered to be the laws of physics. Newton also developed calculus, the mathematical study of change, which provided new mathematical methods for solving physical problems. The discovery of new laws in thermodynamics, chemistry, and electromagnetics resulted from greater research efforts during the Industrial Revolution as energy needs increased. However, inaccuracies in classical mechanics for very small objects and very high velocities led to the development of modern physics in the 20th century. Modern physics began in the early 20th century with the work of Max Planck in quantum theory and Albert Einstein's theory of relativity; both of these theories came about due to inaccuracies in classical mechanics in certain situations. Quantum mechanics would come to be pioneered by Werner Heisenberg, Erwin Schrödinger and Paul Dirac; from this early work, and work in related fields, the Standard Model of particle physics was derived. Areas of mathematics in general are important to this field, such as the study of probabilities. In many ways, physics stems from ancient Greek philosophy.
25.
Geometry
–
Geometry is a branch of mathematics concerned with questions of shape, size, relative position of figures, and the properties of space. A mathematician who works in the field of geometry is called a geometer. Geometry arose independently in a number of early cultures as a practical way of dealing with lengths, areas, and volumes. Geometry began to see elements of formal mathematical science emerging in the West as early as the 6th century BC. By the 3rd century BC, geometry was put into an axiomatic form by Euclid, whose treatment, Euclid's Elements, set a standard for many centuries to follow. Geometry arose independently in India, with texts providing rules for geometric constructions appearing as early as the 3rd century BC. Islamic scientists preserved Greek ideas and expanded on them during the Middle Ages. By the early 17th century, geometry had been put on a solid analytic footing by mathematicians such as René Descartes. Since then, and into modern times, geometry has expanded into non-Euclidean geometry and manifolds. While geometry has evolved significantly throughout the years, there are some general concepts that are more or less fundamental to it; these include the concepts of points, lines, planes, surfaces, and angles. Contemporary geometry has many subfields. Euclidean geometry is geometry in its classical sense; the mandatory educational curriculum of the majority of nations includes the study of points, lines, planes, angles, triangles, congruence, similarity, solid figures, and circles. Euclidean geometry also has applications in computer science, crystallography, and various branches of modern mathematics. Differential geometry uses techniques of calculus and linear algebra to study problems in geometry; it has applications in physics, including in general relativity. Topology is the field concerned with the properties of geometric objects that are unchanged by continuous mappings.
In practice, this often means dealing with large-scale properties of spaces. Convex geometry investigates convex shapes in the Euclidean space and its more abstract analogues, often using techniques of real analysis; it has close connections to convex analysis, optimization and functional analysis. Algebraic geometry studies geometry through the use of multivariate polynomials and other algebraic techniques; it has applications in many areas, including cryptography and string theory. Discrete geometry is concerned mainly with questions of relative position of simple geometric objects, such as points, lines and circles; it shares many methods and principles with combinatorics. Geometry has applications to many fields, including art, architecture and physics, as well as to other branches of mathematics. The earliest recorded beginnings of geometry can be traced to ancient Mesopotamia; the earliest known texts on geometry are the Egyptian Rhind Papyrus and Moscow Papyrus, and Babylonian clay tablets such as Plimpton 322. For example, the Moscow Papyrus gives a formula for calculating the volume of a truncated pyramid, and later clay tablets demonstrate that Babylonian astronomers implemented trapezoid procedures for computing Jupiter's position and motion within time-velocity space.
26.
Spherical coordinate system
–
The spherical coordinate system can be seen as the three-dimensional version of the polar coordinate system. The radial distance is called the radius or radial coordinate; the polar angle may be called colatitude, zenith angle, normal angle, or inclination angle. The use of symbols and the order of the coordinates differs between sources. In both systems ρ is often used instead of r, and other conventions are also used, so great care needs to be taken to check which one is being used. A number of different spherical coordinate systems following other conventions are used outside mathematics. In a geographical coordinate system positions are measured in latitude, longitude and height or altitude, and there are a number of different celestial coordinate systems based on different fundamental planes. In such systems the polar angle is often replaced by the elevation angle measured from the reference plane; an elevation angle of zero is at the horizon. The spherical coordinate system generalises the two-dimensional polar coordinate system; it can also be extended to higher-dimensional spaces, where it is referred to as a hyperspherical coordinate system. To define a spherical coordinate system, one must choose two orthogonal directions, the zenith and the azimuth reference, and an origin point in space. These choices determine a reference plane that contains the origin and is perpendicular to the zenith. The spherical coordinates of a point P are then defined as follows: the inclination is the angle between the zenith direction and the line segment OP, and the azimuth is the angle measured from the azimuth reference direction to the orthogonal projection of the line segment OP on the reference plane. The sign of the azimuth is determined by choosing what is a positive sense of turning about the zenith; this choice is arbitrary, and is part of the coordinate system's definition. The elevation angle is 90 degrees minus the inclination angle. If the inclination is zero or 180 degrees, the azimuth is arbitrary; if the radius is zero, both azimuth and inclination are arbitrary.
In linear algebra, the vector from the origin O to the point P is often called the position vector of P. Several different conventions exist for representing the three coordinates and for the order in which they should be written. The use of (r, θ, φ) to denote radial distance, inclination, and azimuth, respectively, is common practice in physics and is specified by ISO standard 80000-2:2009, and earlier by ISO 31-11.
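Under the physics convention just described, with θ the inclination from the zenith and φ the azimuth, the conversion between spherical and Cartesian coordinates can be sketched as follows (a minimal illustration; the function names are our own):

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """Convert (r, theta, phi) -- radial distance, inclination from the
    zenith, azimuth -- to Cartesian (x, y, z), physics/ISO convention."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

def cartesian_to_spherical(x, y, z):
    """Inverse conversion.  On the zenith axis the azimuth is arbitrary
    (the degenerate case noted above); atan2 then returns 0."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r) if r > 0 else 0.0
    phi = math.atan2(y, x)
    return r, theta, phi
```

A round trip through both functions recovers the original triple (away from the degenerate axis), which is the local invertibility that makes this a usable coordinate system.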
27.
Linear independence
–
These concepts are central to the definition of dimension: a vector space can be finite-dimensional or infinite-dimensional depending on the number of linearly independent basis vectors. The definition of linear dependence, and the ability to determine whether a subset of vectors in a vector space is linearly dependent, are central to determining a basis for a vector space. If a set of vectors is linearly dependent, then some vector in it, say v1, can be written as a linear combination of the remaining vectors. A set is linearly independent when no vector in the set can be represented as a linear combination of the remaining vectors in the set. In other words, a set of vectors is linearly independent if the only representation of the zero vector 0→ as a linear combination of its vectors is the trivial representation in which all the scalars ai are zero. The definition is extended to families of vectors in order to allow the number of linearly independent vectors in a space to be countably infinite. More generally, let V be a vector space over a field K. A set X of elements of V is linearly independent if the corresponding family (x)x∈X is linearly independent. Equivalently, a family is dependent if some member is in the span of the rest of the family; the trivial case of the empty family must be regarded as independent for theorems to apply. A set of vectors which is linearly independent and spans some vector space forms a basis for that space. For example, the vector space of all polynomials in x over the reals has the subset {1, x, x2, …} as a basis. A geographic example may help to clarify the concept of linear independence. A person describing the location of a certain place might say, "It is 3 miles north and 4 miles east of here." This is sufficient information to describe the location, because the geographic coordinate system may be considered as a 2-dimensional vector space. The person might add, "The place is 5 miles northeast of here." Although this last statement is true, it is not necessary: in this example the 3-miles-north vector and the 4-miles-east vector are linearly independent. That is to say, the north vector cannot be described in terms of the east vector, and vice versa.
The third, 5-miles-northeast vector is a linear combination of the other two vectors, and it makes the set of three vectors linearly dependent; that is, one of the three vectors is unnecessary.
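The geographic example above can be checked numerically. In the plane, two vectors are linearly independent exactly when the determinant of the 2 × 2 matrix they form is nonzero; a minimal sketch (the helper name `det2` is illustrative):

```python
def det2(u, v):
    """2x2 determinant of the matrix with columns u and v; it is nonzero
    exactly when u and v are linearly independent in the plane."""
    return u[0] * v[1] - u[1] * v[0]

north = (0, 3)       # 3 miles north
east = (4, 0)        # 4 miles east
northeast = (4, 3)   # the 5-miles-northeast vector = north + east

assert det2(north, east) != 0       # the pair is linearly independent
assert det2(north, northeast) != 0  # any two of the three are independent
# but northeast = north + east, so the set of all three is dependent:
assert northeast == (north[0] + east[0], north[1] + east[1])
```

Any two of the three vectors form an independent pair, yet the set of all three is dependent, matching the observation that one of the three statements of direction is redundant.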
28.
Levi-Civita symbol
–
It is named after the Italian mathematician and physicist Tullio Levi-Civita. Other names include the permutation symbol, antisymmetric symbol, or alternating symbol. The standard letters to denote the Levi-Civita symbol are the Greek lower-case epsilon ε or ϵ, or, less commonly, the Latin lower-case e. Index notation allows one to display permutations in a way compatible with tensor analysis: the symbol is written ε_{i1 i2 … in}, where each index i1, i2, …, in takes values 1, 2, …, n. There are n^n indexed values of ε_{i1 i2 … in}, which can be arranged into an n-dimensional array. The key defining property of the symbol is total antisymmetry in all the indices: when any two indices are interchanged, equal or not, the symbol is negated, ε_{… ip … iq …} = −ε_{… iq … ip …}. If any two indices are equal, the symbol is zero. The value ε_{12…n} must be defined, else the particular values of the symbol for all permutations are indeterminate. Most authors choose ε_{12…n} = +1, which means the Levi-Civita symbol equals the sign of a permutation when the indices are all unequal; this choice is used throughout this article. The values of the Levi-Civita symbol are independent of any metric tensor. Also, the specific term "symbol" emphasizes that it is not a tensor because of how it transforms between coordinate systems; however, it can be interpreted as a tensor density. The Levi-Civita symbol allows the determinant of a square matrix, and the cross product of two vectors in three dimensions, to be expressed in index notation. The three- and higher-dimensional Levi-Civita symbols are used more commonly than the two-dimensional one. In three dimensions only, the cyclic permutations of (1, 2, 3) are all even permutations, and the anticyclic permutations are all odd permutations; this means in 3d it is sufficient to take cyclic or anticyclic permutations of (1, 2, 3) to easily obtain all the even or odd permutations. Analogous to 2-dimensional matrices, the values of the 3-dimensional Levi-Civita symbol can be arranged into a 3 × 3 × 3 array. The formula is valid for all index values and for any n.
However, computing the formula above naively is O(n^2) in time complexity. A tensor whose components in an orthonormal basis are given by the Levi-Civita symbol is sometimes called a permutation tensor. It is actually a pseudotensor because it changes sign under a transformation of Jacobian determinant −1. As the Levi-Civita symbol is a pseudotensor, the result of taking a cross product is a pseudovector. Under a general coordinate change, the components of the permutation tensor are multiplied by the Jacobian of the transformation matrix. This implies that its components can differ from the values of the Levi-Civita symbol in coordinate frames other than the one in which the tensor was defined; if the frame is orthonormal, the factor is ±1 depending on whether the orientation of the frame is the same or not. In index-free tensor notation, the Levi-Civita symbol is replaced by the concept of the Hodge dual. The symbol's indices may be written as subscripts or superscripts with no change in meaning; thus, one could write ε^{i j … k} = ε_{i j … k}.
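A short sketch of the symbol, computed as the sign of a permutation by pairwise comparison (the O(n^2) approach mentioned above), together with its classic use for the cross product; the function names are our own:

```python
def levi_civita(*indices):
    """Levi-Civita symbol for 1-based indices: +1 for an even permutation
    of (1, ..., n), -1 for an odd one, 0 if any index repeats.
    Counts inversions pairwise, which is O(n^2) in the number of indices."""
    eps = 1
    for q in range(len(indices)):
        for p in range(q):
            if indices[p] == indices[q]:
                return 0
            if indices[p] > indices[q]:
                eps = -eps  # each inversion flips the sign
    return eps

def cross(a, b):
    """Cross product in index notation: c_i = eps_{ijk} a_j b_k."""
    return tuple(sum(levi_civita(i, j, k) * a[j - 1] * b[k - 1]
                     for j in (1, 2, 3) for k in (1, 2, 3))
                 for i in (1, 2, 3))
```

For example, `cross((1, 0, 0), (0, 1, 0))` yields `(0, 0, 1)`, the expected right-handed result.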
29.
Coordinate system
–
The order of the coordinates is significant, and they are sometimes identified by their position in an ordered tuple and sometimes by a letter, as in "the x-coordinate". The coordinates are taken to be real numbers in elementary mathematics. The use of a coordinate system allows problems in geometry to be translated into problems about numbers and vice versa; this is the basis of analytic geometry. The simplest example of a coordinate system is the identification of points on a line with real numbers using the number line. In this system, an arbitrary point O is chosen on a given line, and the coordinate of a point P is defined as the signed distance from O to P. Each point is given a unique coordinate and each real number is the coordinate of a unique point. The prototypical example of a coordinate system is the Cartesian coordinate system. In the plane, two perpendicular lines are chosen and the coordinates of a point are taken to be the signed distances to the lines. In three dimensions, three mutually perpendicular planes are chosen and the three coordinates of a point are the signed distances to each of the planes. This can be generalized to create n coordinates for any point in n-dimensional Euclidean space. Depending on the direction and order of the coordinate axes, the system may be a right-handed or a left-handed system. This is one of many coordinate systems; another common coordinate system for the plane is the polar coordinate system. A point is chosen as the pole and a ray from this point is taken as the polar axis. For a given angle θ, there is a single line through the pole whose angle with the polar axis is θ, and for a given number r there is a point on this line whose signed distance from the origin is r. For a given pair of coordinates (r, θ) there is a single point, but any point is represented by many pairs of coordinates: for example, (r, θ), (r, θ + 2π), and (−r, θ + π) are all polar coordinates for the same point. The pole is represented by (0, θ) for any value of θ. There are two common methods for extending the polar coordinate system to three dimensions.
In the cylindrical coordinate system, a z-coordinate with the same meaning as in Cartesian coordinates is added to the r and θ polar coordinates, giving a triple (r, θ, z). Spherical coordinates take this a step further by converting the pair of cylindrical coordinates (r, z) to polar coordinates (ρ, φ), giving a triple (ρ, θ, φ). A point in the plane may be represented in homogeneous coordinates by a triple (x, y, z) where x/z and y/z are the Cartesian coordinates of the point.
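The homogeneous triple just mentioned can be illustrated with a small sketch (the helper name is hypothetical): every nonzero scalar multiple of a triple names the same Cartesian point, which is what makes the representation "homogeneous".

```python
def from_homogeneous(x, y, z):
    """Cartesian point represented by the homogeneous triple (x, y, z)."""
    if z == 0:
        raise ValueError("z = 0 represents a point at infinity")
    return (x / z, y / z)

# All nonzero scalar multiples of a triple name the same point:
assert from_homogeneous(2, 4, 2) == from_homogeneous(1, 2, 1) == (1.0, 2.0)
```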
30.
Curvilinear coordinates
–
In geometry, curvilinear coordinates are a coordinate system for Euclidean space in which the coordinate lines may be curved. These coordinates may be derived from a set of Cartesian coordinates by using a transformation that is locally invertible at each point; this means that one can convert a point given in a Cartesian coordinate system to its curvilinear coordinates and back. The name curvilinear coordinates was coined by the French mathematician Lamé. Well-known examples of curvilinear coordinate systems in three-dimensional Euclidean space are Cartesian, cylindrical, and spherical polar coordinates. A Cartesian coordinate surface in this space is a coordinate plane; in the same space, the coordinate surface r = 1 in spherical coordinates is the surface of a unit sphere. The formalism of curvilinear coordinates provides a unified and general description of the standard coordinate systems. Curvilinear coordinates are used to define the location or distribution of physical quantities which may be, for example, scalars or vectors; such expressions then become valid for any curvilinear coordinate system. Depending on the application, a curvilinear coordinate system may be simpler to use than the Cartesian coordinate system. For instance, a problem with spherical symmetry defined in R3 is usually easier to solve in spherical polar coordinates than in Cartesian coordinates, and equations with boundary conditions that follow coordinate surfaces for a particular coordinate system may be easier to solve in that system. One would, by contrast, naturally describe the motion of a particle in a rectangular box in Cartesian coordinates. Spherical coordinates are among the most used curvilinear coordinate systems in fields such as Earth sciences, cartography, and physics. A point P in 3d space can be defined using Cartesian coordinates by r = x e_x + y e_y + z e_z, and it can also be defined by its curvilinear coordinates if this triplet of numbers defines a single point in an unambiguous way.
The coordinate axes are determined by the tangents to the coordinate curves at the intersection of the three coordinate surfaces. They are not in general fixed directions in space, as happens to be the case for simple Cartesian coordinates, and thus there is generally no natural global basis for curvilinear coordinates. Applying the same derivatives to the curvilinear system locally at a point P defines the basis vectors there. Such a basis, whose vectors change their direction and/or magnitude from point to point, is called a local basis; all bases associated with curvilinear coordinates are necessarily local. Basis vectors that are the same at all points are global bases. Note: for this article e is reserved for the standard basis and h or b is for the curvilinear basis.
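The local basis can be made concrete by differentiating the position vector with respect to each coordinate. A minimal sketch for spherical coordinates, using central differences so no symbolic machinery is needed (function names are illustrative):

```python
import math

def position(r, theta, phi):
    """Cartesian position vector for spherical coordinates (physics convention)."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

def local_basis(r, theta, phi, h=1e-6):
    """Unnormalised local basis vectors b_i = dr/dq_i at the point
    (r, theta, phi), approximated by central differences.  Evaluating at
    different points shows the directions changing from point to point."""
    q = [r, theta, phi]
    basis = []
    for i in range(3):
        qp = list(q); qp[i] += h
        qm = list(q); qm[i] -= h
        fp, fm = position(*qp), position(*qm)
        basis.append(tuple((a - b) / (2 * h) for a, b in zip(fp, fm)))
    return basis
```

At (r, θ, φ) = (1, π/2, 0) this gives, up to discretisation error, b_r ≈ (1, 0, 0), b_θ ≈ (0, 0, −1), and b_φ ≈ (0, 1, 0); evaluating at another point yields different directions, which is exactly the "local" character described above.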
31.
Polar coordinate system
–
The reference point is called the pole, and the ray from the pole in the reference direction is the polar axis. The distance from the pole is called the radial coordinate or radius. The concepts of angle and radius were already used by ancient peoples of the first millennium BC. In On Spirals, Archimedes describes the Archimedean spiral, a function whose radius depends on the angle; the Greek work, however, did not extend to a full coordinate system. From the 8th century AD onward, astronomers developed methods for approximating and calculating the direction to Mecca, and its distance, from any location on the Earth, and from the 9th century onward they were using spherical trigonometry and map projection methods to determine these quantities accurately. There are various accounts of the introduction of polar coordinates as part of a formal coordinate system. The full history of the subject is described in Harvard professor Julian Lowell Coolidge's Origin of Polar Coordinates. Grégoire de Saint-Vincent and Bonaventura Cavalieri independently introduced the concepts in the mid-seventeenth century: Saint-Vincent wrote about them privately in 1625 and published his work in 1647, while Cavalieri first used polar coordinates to solve a problem relating to the area within an Archimedean spiral. Blaise Pascal subsequently used polar coordinates to calculate the length of parabolic arcs. In Method of Fluxions, Sir Isaac Newton examined the transformations between polar coordinates, which he referred to as the "Seventh Manner; For Spirals", and nine other coordinate systems. In the journal Acta Eruditorum, Jacob Bernoulli used a system with a point on a line, called the pole; coordinates were specified by the distance from the pole and the angle from the polar axis. Bernoulli's work extended to finding the radius of curvature of curves expressed in these coordinates. The actual term polar coordinates has been attributed to Gregorio Fontana and was used by 18th-century Italian writers.
The term appeared in English in George Peacock's 1816 translation of Lacroix's Differential and Integral Calculus. Alexis Clairaut was the first to think of polar coordinates in three dimensions, and Leonhard Euler was the first to actually develop them. The radial coordinate is often denoted by r or ρ, and the angular coordinate is specified as ϕ by ISO standard 31-11. Angles in polar notation are generally expressed in either degrees or radians: degrees are traditionally used in navigation, surveying, and many applied disciplines, while radians are more common in mathematics. In many contexts, a positive angular coordinate means that the angle ϕ is measured counterclockwise from the axis, which in mathematical literature is often drawn horizontal. Adding any number of full turns to the angular coordinate does not change the corresponding direction. Also, a negative radial coordinate is best interpreted as the corresponding positive distance measured in the opposite direction. Therefore, the same point can be expressed with an infinite number of different polar coordinates, (r, ϕ + 2πn) or (−r, ϕ + π + 2πn).
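The non-uniqueness of polar coordinates can be verified by converting a few representations of the same point to Cartesian coordinates (a small sketch; the helper name is our own):

```python
import math

def polar_to_cartesian(r, phi):
    """Cartesian point for polar coordinates (r, phi), phi in radians."""
    return (r * math.cos(phi), r * math.sin(phi))

# The same point admits infinitely many polar representations:
p1 = polar_to_cartesian(2.0, math.pi / 6)                # (r, phi)
p2 = polar_to_cartesian(2.0, math.pi / 6 + 2 * math.pi)  # add a full turn
p3 = polar_to_cartesian(-2.0, math.pi / 6 + math.pi)     # negative radius
assert all(abs(a - b) < 1e-12 for a, b in zip(p1, p2))
assert all(abs(a - b) < 1e-12 for a, b in zip(p1, p3))
```

The second representation adds a full turn to the angle; the third uses a negative radius measured in the opposite direction, as described above.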
32.
Unit interval
–
In mathematics, the unit interval is the closed interval [0, 1], that is, the set of all real numbers that are greater than or equal to 0 and less than or equal to 1. In addition to its role in real analysis, the unit interval is used to study homotopy theory in the field of topology. In the literature, the term unit interval is sometimes applied to the other shapes that an interval from 0 to 1 could take: (0, 1], [0, 1), and (0, 1). However, the notation I is most commonly reserved for the closed interval [0, 1]. The unit interval is a complete metric space, homeomorphic to the extended real number line. As a topological space, it is compact, contractible, and path connected. The Hilbert cube is obtained by taking a topological product of countably many copies of the unit interval. In mathematical analysis, the unit interval is a one-dimensional analytical manifold whose boundary consists of the two points 0 and 1; its standard orientation goes from 0 to 1. The unit interval is a totally ordered set and a complete lattice. The size or cardinality of a set is the number of elements it contains. The unit interval is a subset of the real numbers R; however, it has the same size as the whole set, the cardinality of the continuum. Moreover, it has the same number of points as a square of area 1 and as a cube of volume 1. The number of elements in all these sets is uncountable. The interval [−1, 1], demarcated by the positive and negative units, also occurs frequently, such as in the range of the trigonometric functions sine and cosine. This interval may be used for the domain of inverse functions; for instance, when θ is restricted to [−π/2, π/2], then sin θ is in this interval and arcsine is defined there. Sometimes, the term unit interval is used to refer to objects that play a role in various branches of mathematics analogous to the role that [0, 1] plays in homotopy theory. For example, in the theory of quivers, the analogue of the unit interval is the graph whose vertex set is {0, 1}, with a single edge from 0 to 1.
One can then define a notion of homotopy between quiver homomorphisms analogous to the notion of homotopy between continuous maps. In logic, the unit interval can be interpreted as a generalization of the Boolean domain {0, 1}: rather than only taking the values 0 or 1, any value between and including 0 and 1 can be assumed. Algebraically, negation is replaced with 1 − x and conjunction is replaced with multiplication. Interpreting these values as logical truth values yields a multi-valued logic, which forms the basis for fuzzy logic and probabilistic logic. In these interpretations, a value is interpreted as the degree of truth, that is, to what extent a proposition is true, or the probability that the proposition is true.
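The algebraic replacements just described can be sketched directly. The following minimal example uses negation 1 − x and conjunction as multiplication, with disjunction derived via De Morgan's law (the function names are our own):

```python
def f_not(x):
    """Negation on the unit interval: 1 - x."""
    return 1.0 - x

def f_and(x, y):
    """Conjunction replaced by multiplication."""
    return x * y

def f_or(x, y):
    """Disjunction derived via De Morgan: not(not x and not y)."""
    return f_not(f_and(f_not(x), f_not(y)))

# Restricted to the endpoints 0 and 1, these reduce to Boolean logic:
assert f_and(1, 1) == 1 and f_and(1, 0) == 0
assert f_or(0, 0) == 0.0 and f_or(1, 0) == 1.0
# Intermediate values behave as degrees of truth:
assert f_and(0.5, 0.5) == 0.25
```

On the endpoints the operations agree with the Boolean connectives, while intermediate values express partial truth, matching the fuzzy and probabilistic readings above.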
33.
Unit square
–
In mathematics, a unit square is a square whose sides have length 1. Often, the unit square refers specifically to the square in the Cartesian plane with corners at the four points (0, 0), (1, 0), (0, 1), and (1, 1). In a Cartesian coordinate system with coordinates (x, y), the unit square is defined as the square consisting of the points where both x and y lie in the closed unit interval from 0 to 1; that is, the unit square is the Cartesian product I × I. The unit square can also be thought of as a subset of the complex plane; in this view, the four corners of the unit square are at the four complex numbers 0, 1, i, and 1 + i. It is not known whether any point in the plane is at a rational distance from all four vertices of the unit square; however, it is known that no such point is on an edge of the square.
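The complex-plane view of the corners makes distance computations immediate, since the distance between two points is the modulus of their difference. A small sketch (the helper name is illustrative):

```python
def corner_distances(x, y):
    """Distances from (x, y) to the unit square's corners
    (0,0), (1,0), (0,1), (1,1), written via the complex plane."""
    corners = (0 + 0j, 1 + 0j, 0 + 1j, 1 + 1j)
    return tuple(abs(complex(x, y) - c) for c in corners)

# The centre of the square is equidistant (sqrt(2)/2) from all four corners:
d = corner_distances(0.5, 0.5)
assert all(abs(v - 0.5 ** 0.5) < 1e-12 for v in d)
```

The centre's common distance sqrt(2)/2 is irrational, so it is not a counterexample to the open question above.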
34.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback, and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007, and 10 digits long if assigned before 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN configuration of recognition was generated in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966. The 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines. The ISBN configuration of recognition was generated in 1967 in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974, and the ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns published by Hodder in 1965 has SBN 340013818, with 340 indicating the publisher and 01381 their serial number. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Number EAN-13s.
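The check-digit remark can be illustrated. ISBN-10 uses a weighted mod-11 sum, with weights 10 down to 2 over the first nine digits and a check digit chosen so the full weighted sum is divisible by 11 (the remainder 10 is written "X"). A minimal sketch, with a function name of our own:

```python
def isbn10_check_digit(first9):
    """Check digit for a 9-digit ISBN-10 body: weight the digits by
    10, 9, ..., 2, then choose the digit that makes the total weighted
    sum divisible by 11; a remainder of 10 is written as 'X'."""
    total = sum(w * int(d) for w, d in zip(range(10, 1, -1), first9))
    r = (11 - total % 11) % 11
    return "X" if r == 10 else str(r)

# The SBN example above: prefixing 0 to SBN 340013818 gives body
# 034001381, and the leading 0 carries weight 10 x 0 = 0, so the
# check digit 8 is unchanged -- no re-calculation needed.
assert isbn10_check_digit("034001381") == "8"
```

This confirms why an SBN converts to an ISBN-10 by prefixing a 0 without recomputing the check digit: the new leading digit contributes nothing to the weighted sum.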
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces; separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture. In the United Kingdom, the United States, and some other countries, the service is provided by non-government-funded organisations. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker.